00:00:00.002 Started by upstream project "autotest-per-patch" build number 127109 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.111 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.156 Fetching changes from the remote Git repository 00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.201 Using shallow fetch with depth 1 00:00:00.201 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.201 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.274 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.274 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.856 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.865 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.877 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:05.877 > git config core.sparsecheckout # timeout=10 00:00:05.888 > git read-tree -mu HEAD # timeout=10 00:00:05.904 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:05.948 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:05.948 > git 
rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.044 [Pipeline] Start of Pipeline 00:00:06.055 [Pipeline] library 00:00:06.056 Loading library shm_lib@master 00:00:06.056 Library shm_lib@master is cached. Copying from home. 00:00:06.072 [Pipeline] node 00:00:06.083 Running on GP2 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.084 [Pipeline] { 00:00:06.095 [Pipeline] catchError 00:00:06.096 [Pipeline] { 00:00:06.108 [Pipeline] wrap 00:00:06.115 [Pipeline] { 00:00:06.121 [Pipeline] stage 00:00:06.122 [Pipeline] { (Prologue) 00:00:06.323 [Pipeline] sh 00:00:06.604 + logger -p user.info -t JENKINS-CI 00:00:06.620 [Pipeline] echo 00:00:06.621 Node: GP2 00:00:06.628 [Pipeline] sh 00:00:06.923 [Pipeline] setCustomBuildProperty 00:00:06.932 [Pipeline] echo 00:00:06.933 Cleanup processes 00:00:06.937 [Pipeline] sh 00:00:07.219 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.219 3666317 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.235 [Pipeline] sh 00:00:07.523 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.523 ++ grep -v 'sudo pgrep' 00:00:07.523 ++ awk '{print $1}' 00:00:07.523 + sudo kill -9 00:00:07.523 + true 00:00:07.540 [Pipeline] cleanWs 00:00:07.552 [WS-CLEANUP] Deleting project workspace... 00:00:07.552 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.559 [WS-CLEANUP] done 00:00:07.563 [Pipeline] setCustomBuildProperty 00:00:07.577 [Pipeline] sh 00:00:07.866 + sudo git config --global --replace-all safe.directory '*' 00:00:07.937 [Pipeline] httpRequest 00:00:07.971 [Pipeline] echo 00:00:07.972 Sorcerer 10.211.164.101 is alive 00:00:07.978 [Pipeline] httpRequest 00:00:07.983 HttpMethod: GET 00:00:07.983 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.984 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:07.986 Response Code: HTTP/1.1 200 OK 00:00:07.987 Success: Status code 200 is in the accepted range: 200,404 00:00:07.987 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.960 [Pipeline] sh 00:00:09.247 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:09.260 [Pipeline] httpRequest 00:00:09.291 [Pipeline] echo 00:00:09.292 Sorcerer 10.211.164.101 is alive 00:00:09.298 [Pipeline] httpRequest 00:00:09.302 HttpMethod: GET 00:00:09.302 URL: http://10.211.164.101/packages/spdk_643864934f886762436325fcde224d60f32ee368.tar.gz 00:00:09.303 Sending request to url: http://10.211.164.101/packages/spdk_643864934f886762436325fcde224d60f32ee368.tar.gz 00:00:09.305 Response Code: HTTP/1.1 200 OK 00:00:09.306 Success: Status code 200 is in the accepted range: 200,404 00:00:09.306 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_643864934f886762436325fcde224d60f32ee368.tar.gz 00:00:30.388 [Pipeline] sh 00:00:30.675 + tar --no-same-owner -xf spdk_643864934f886762436325fcde224d60f32ee368.tar.gz 00:00:33.993 [Pipeline] sh 00:00:34.281 + git -C spdk log --oneline -n5 00:00:34.281 643864934 scripts/pkgdep: Drop support for downloading shfmt binaries 00:00:34.281 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:00:34.281 ba69d4678 event/scheduler: remove custom 
opts from static scheduler 00:00:34.281 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:00:34.281 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:00:34.294 [Pipeline] } 00:00:34.312 [Pipeline] // stage 00:00:34.323 [Pipeline] stage 00:00:34.325 [Pipeline] { (Prepare) 00:00:34.344 [Pipeline] writeFile 00:00:34.363 [Pipeline] sh 00:00:34.651 + logger -p user.info -t JENKINS-CI 00:00:34.666 [Pipeline] sh 00:00:34.952 + logger -p user.info -t JENKINS-CI 00:00:34.966 [Pipeline] sh 00:00:35.254 + cat autorun-spdk.conf 00:00:35.254 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.254 SPDK_TEST_NVMF=1 00:00:35.254 SPDK_TEST_NVME_CLI=1 00:00:35.254 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.254 SPDK_TEST_NVMF_NICS=e810 00:00:35.254 SPDK_TEST_VFIOUSER=1 00:00:35.254 SPDK_RUN_UBSAN=1 00:00:35.254 NET_TYPE=phy 00:00:35.264 RUN_NIGHTLY=0 00:00:35.268 [Pipeline] readFile 00:00:35.294 [Pipeline] withEnv 00:00:35.297 [Pipeline] { 00:00:35.310 [Pipeline] sh 00:00:35.598 + set -ex 00:00:35.598 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:00:35.598 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:35.598 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:35.598 ++ SPDK_TEST_NVMF=1 00:00:35.598 ++ SPDK_TEST_NVME_CLI=1 00:00:35.598 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:35.598 ++ SPDK_TEST_NVMF_NICS=e810 00:00:35.598 ++ SPDK_TEST_VFIOUSER=1 00:00:35.598 ++ SPDK_RUN_UBSAN=1 00:00:35.598 ++ NET_TYPE=phy 00:00:35.598 ++ RUN_NIGHTLY=0 00:00:35.598 + case $SPDK_TEST_NVMF_NICS in 00:00:35.598 + DRIVERS=ice 00:00:35.598 + [[ tcp == \r\d\m\a ]] 00:00:35.598 + [[ -n ice ]] 00:00:35.598 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:00:35.598 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:00:35.598 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:00:35.598 rmmod: ERROR: Module irdma is not currently loaded 00:00:35.598 rmmod: ERROR: Module i40iw is not currently loaded 00:00:35.598 rmmod: 
ERROR: Module iw_cxgb4 is not currently loaded 00:00:35.598 + true 00:00:35.598 + for D in $DRIVERS 00:00:35.598 + sudo modprobe ice 00:00:35.598 + exit 0 00:00:35.608 [Pipeline] } 00:00:35.627 [Pipeline] // withEnv 00:00:35.632 [Pipeline] } 00:00:35.649 [Pipeline] // stage 00:00:35.660 [Pipeline] catchError 00:00:35.663 [Pipeline] { 00:00:35.680 [Pipeline] timeout 00:00:35.680 Timeout set to expire in 50 min 00:00:35.682 [Pipeline] { 00:00:35.699 [Pipeline] stage 00:00:35.701 [Pipeline] { (Tests) 00:00:35.720 [Pipeline] sh 00:00:36.010 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.010 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.010 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.010 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:00:36.010 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.010 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.010 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:00:36.010 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.010 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:00:36.010 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:00:36.010 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:00:36.010 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:36.010 + source /etc/os-release 00:00:36.010 ++ NAME='Fedora Linux' 00:00:36.010 ++ VERSION='38 (Cloud Edition)' 00:00:36.010 ++ ID=fedora 00:00:36.010 ++ VERSION_ID=38 00:00:36.010 ++ VERSION_CODENAME= 00:00:36.010 ++ PLATFORM_ID=platform:f38 00:00:36.010 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:00:36.010 ++ ANSI_COLOR='0;38;2;60;110;180' 00:00:36.010 ++ LOGO=fedora-logo-icon 00:00:36.010 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:00:36.010 ++ HOME_URL=https://fedoraproject.org/ 00:00:36.010 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:00:36.010 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:00:36.010 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:00:36.010 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:00:36.010 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:00:36.010 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:00:36.010 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:00:36.010 ++ SUPPORT_END=2024-05-14 00:00:36.010 ++ VARIANT='Cloud Edition' 00:00:36.010 ++ VARIANT_ID=cloud 00:00:36.010 + uname -a 00:00:36.010 Linux spdk-gp-02 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:00:36.010 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:00:36.951 Hugepages 00:00:36.951 node hugesize free / total 00:00:36.951 node0 1048576kB 0 / 0 00:00:36.951 node0 2048kB 0 / 0 00:00:36.951 node1 1048576kB 0 / 0 00:00:36.951 node1 2048kB 0 / 0 00:00:36.951 00:00:36.951 Type BDF Vendor Device NUMA Driver Device Block devices 00:00:36.951 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - - 
00:00:36.951 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - - 00:00:36.951 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - - 00:00:36.951 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - - 00:00:36.951 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:00:36.951 + rm -f /tmp/spdk-ld-path 00:00:36.951 + source autorun-spdk.conf 00:00:36.951 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.951 ++ SPDK_TEST_NVMF=1 00:00:36.951 ++ SPDK_TEST_NVME_CLI=1 00:00:36.951 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.951 ++ SPDK_TEST_NVMF_NICS=e810 00:00:36.951 ++ SPDK_TEST_VFIOUSER=1 00:00:36.951 ++ SPDK_RUN_UBSAN=1 00:00:36.951 ++ NET_TYPE=phy 00:00:36.951 ++ RUN_NIGHTLY=0 00:00:36.951 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:00:36.951 + [[ -n '' ]] 00:00:36.951 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:36.951 + for M in /var/spdk/build-*-manifest.txt 00:00:36.951 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:00:36.951 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.951 + for M in /var/spdk/build-*-manifest.txt 00:00:36.951 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:00:36.951 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:00:36.951 ++ uname 00:00:36.951 + [[ Linux == \L\i\n\u\x ]] 00:00:36.951 + sudo dmesg -T 
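The `+ cat autorun-spdk.conf` / `+ source autorun-spdk.conf` steps above work because the conf file is plain shell: each `KEY=value` line becomes a variable when sourced. A minimal sketch of that pattern (trimmed to two of the keys from the log; the temp-file handling is illustrative):

```shell
# autorun-spdk.conf is ordinary shell, so sourcing it defines one variable
# per KEY=value line. The temp file stands in for the real conf path.
conf=$(mktemp)
cat > "$conf" <<'EOF'
SPDK_TEST_NVMF=1
SPDK_TEST_NVMF_TRANSPORT=tcp
EOF
. "$conf"                 # same effect as bash's "source"
rm -f "$conf"
echo "NVMF=$SPDK_TEST_NVMF transport=$SPDK_TEST_NVMF_TRANSPORT"
```

This is also why every key appears twice in the log: once from `cat`, then again with `++` prefixes as `set -x` traces each assignment while the file is sourced.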
00:00:36.951 + sudo dmesg --clear 00:00:36.951 + dmesg_pid=3666886 00:00:36.951 + [[ Fedora Linux == FreeBSD ]] 00:00:36.951 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.951 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:00:36.951 + sudo dmesg -Tw 00:00:36.951 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:00:36.951 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:36.951 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:00:36.951 + [[ -x /usr/src/fio-static/fio ]] 00:00:36.951 + export FIO_BIN=/usr/src/fio-static/fio 00:00:36.951 + FIO_BIN=/usr/src/fio-static/fio 00:00:36.951 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:00:36.951 + [[ ! -v VFIO_QEMU_BIN ]] 00:00:36.951 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:00:36.951 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.951 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:00:36.951 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:00:36.951 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.951 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:00:36.951 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:00:36.951 Test configuration: 00:00:36.951 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:36.951 SPDK_TEST_NVMF=1 00:00:36.951 SPDK_TEST_NVME_CLI=1 00:00:36.951 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:36.951 SPDK_TEST_NVMF_NICS=e810 00:00:36.951 SPDK_TEST_VFIOUSER=1 00:00:36.951 SPDK_RUN_UBSAN=1 00:00:36.951 NET_TYPE=phy 00:00:37.213 RUN_NIGHTLY=0 22:10:02 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:00:37.213 22:10:02 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:00:37.213 22:10:02 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:00:37.213 
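The `[[ -x /usr/src/fio-static/fio ]]` / `export FIO_BIN=...` pair above is a probe-then-export pattern: a tool's location is advertised to later scripts only after confirming the binary exists. A sketch under the assumption (not shown in the log) that falling back to a `fio` found on `$PATH` is acceptable:

```shell
# Probe-then-export, as in the FIO_BIN step above. The static path comes
# from the log; the PATH fallback is an assumption for illustration.
if [ -x /usr/src/fio-static/fio ]; then
    FIO_BIN=/usr/src/fio-static/fio
else
    FIO_BIN=$(command -v fio || true)   # empty if fio is absent
fi
export FIO_BIN
echo "FIO_BIN=${FIO_BIN:-<unset>}"
```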
22:10:02 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:00:37.213 22:10:02 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.213 22:10:02 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.213 22:10:02 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.213 22:10:02 -- paths/export.sh@5 -- $ export PATH 00:00:37.213 22:10:02 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:00:37.213 22:10:02 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:00:37.213 22:10:02 -- 
common/autobuild_common.sh@447 -- $ date +%s 00:00:37.214 22:10:02 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721851802.XXXXXX 00:00:37.214 22:10:02 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721851802.iw6t8T 00:00:37.214 22:10:02 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:00:37.214 22:10:02 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:00:37.214 22:10:02 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:00:37.214 22:10:02 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:00:37.214 22:10:02 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:00:37.214 22:10:02 -- common/autobuild_common.sh@463 -- $ get_config_params 00:00:37.214 22:10:02 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:00:37.214 22:10:02 -- common/autotest_common.sh@10 -- $ set +x 00:00:37.214 22:10:02 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:00:37.214 22:10:02 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:00:37.214 22:10:02 -- pm/common@17 -- $ local monitor 00:00:37.214 22:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.214 22:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.214 22:10:02 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.214 22:10:02 -- pm/common@21 -- $ date +%s 00:00:37.214 22:10:02 -- pm/common@19 -- $ 
for monitor in "${MONITOR_RESOURCES[@]}" 00:00:37.214 22:10:02 -- pm/common@21 -- $ date +%s 00:00:37.214 22:10:02 -- pm/common@25 -- $ sleep 1 00:00:37.214 22:10:02 -- pm/common@21 -- $ date +%s 00:00:37.214 22:10:02 -- pm/common@21 -- $ date +%s 00:00:37.214 22:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721851802 00:00:37.214 22:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721851802 00:00:37.214 22:10:02 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721851802 00:00:37.214 22:10:02 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721851802 00:00:37.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721851802_collect-vmstat.pm.log 00:00:37.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721851802_collect-cpu-load.pm.log 00:00:37.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721851802_collect-cpu-temp.pm.log 00:00:37.214 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721851802_collect-bmc-pm.bmc.pm.log 00:00:38.157 22:10:03 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:00:38.157 22:10:03 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:00:38.157 22:10:03 -- spdk/autobuild.sh@12 -- 
$ umask 022 00:00:38.157 22:10:03 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:38.157 22:10:03 -- spdk/autobuild.sh@16 -- $ date -u 00:00:38.157 Wed Jul 24 08:10:03 PM UTC 2024 00:00:38.157 22:10:03 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:00:38.157 v24.09-pre-310-g643864934 00:00:38.157 22:10:03 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:00:38.157 22:10:03 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:00:38.157 22:10:03 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:00:38.157 22:10:03 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:00:38.157 22:10:03 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:00:38.157 22:10:03 -- common/autotest_common.sh@10 -- $ set +x 00:00:38.157 ************************************ 00:00:38.157 START TEST ubsan 00:00:38.157 ************************************ 00:00:38.157 22:10:03 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:00:38.157 using ubsan 00:00:38.157 00:00:38.157 real 0m0.000s 00:00:38.157 user 0m0.000s 00:00:38.157 sys 0m0.000s 00:00:38.157 22:10:03 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:00:38.157 22:10:03 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:00:38.157 ************************************ 00:00:38.157 END TEST ubsan 00:00:38.157 ************************************ 00:00:38.157 22:10:03 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:00:38.157 22:10:03 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:00:38.157 22:10:03 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:00:38.157 22:10:03 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:00:38.418 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:00:38.418 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:00:38.676 Using 'verbs' RDMA provider 00:00:49.269 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:01.489 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:01.489 Creating mk/config.mk...done. 00:01:01.489 Creating mk/cc.flags.mk...done. 00:01:01.489 Type 'make' to build. 00:01:01.489 22:10:25 -- spdk/autobuild.sh@69 -- $ run_test make make -j32 00:01:01.489 22:10:25 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:01.489 22:10:25 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:01.489 22:10:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:01.489 ************************************ 00:01:01.489 START TEST make 00:01:01.489 ************************************ 00:01:01.489 22:10:25 make -- common/autotest_common.sh@1123 -- $ make -j32 00:01:01.489 make[1]: Nothing to be done for 'all'. 
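The configure invocation above is assembled from the flag string the log prints as `config_params` (via `get_config_params`), with `--with-shared` appended, and is followed by `make -j32` to match the node's CPU count. A non-executing sketch with the flags copied from the log (`SPDK_DIR` is a placeholder, and the commands are echoed rather than run since the SPDK sources are assumed absent here):

```shell
# Assemble the configure command the way the pipeline does: a base flag
# string plus --with-shared, then a parallel make. SPDK_DIR is a placeholder.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
echo "$SPDK_DIR/configure $config_params --with-shared"
echo "make -j32"
```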
00:01:02.065 The Meson build system 00:01:02.065 Version: 1.3.1 00:01:02.065 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:02.065 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:02.065 Build type: native build 00:01:02.065 Project name: libvfio-user 00:01:02.065 Project version: 0.0.1 00:01:02.065 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:02.065 C linker for the host machine: cc ld.bfd 2.39-16 00:01:02.065 Host machine cpu family: x86_64 00:01:02.065 Host machine cpu: x86_64 00:01:02.065 Run-time dependency threads found: YES 00:01:02.065 Library dl found: YES 00:01:02.065 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:02.065 Run-time dependency json-c found: YES 0.17 00:01:02.065 Run-time dependency cmocka found: YES 1.1.7 00:01:02.065 Program pytest-3 found: NO 00:01:02.065 Program flake8 found: NO 00:01:02.065 Program misspell-fixer found: NO 00:01:02.065 Program restructuredtext-lint found: NO 00:01:02.065 Program valgrind found: YES (/usr/bin/valgrind) 00:01:02.065 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:02.065 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:02.065 Compiler for C supports arguments -Wwrite-strings: YES 00:01:02.065 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:02.065 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:02.065 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:02.065 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:02.065 Build targets in project: 8 00:01:02.065 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:02.065 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:02.065 00:01:02.065 libvfio-user 0.0.1 00:01:02.065 00:01:02.065 User defined options 00:01:02.065 buildtype : debug 00:01:02.065 default_library: shared 00:01:02.065 libdir : /usr/local/lib 00:01:02.065 00:01:02.065 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:02.642 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:02.906 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:02.906 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:02.906 [3/37] Compiling C object samples/null.p/null.c.o 00:01:02.906 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:02.906 [5/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:02.906 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:02.906 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:02.906 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:02.906 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:02.906 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:02.906 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:02.906 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:02.906 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:02.906 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:02.906 [15/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:02.906 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:02.906 [17/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:03.167 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:01:03.167 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:03.167 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:03.167 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:03.167 [22/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:03.167 [23/37] Compiling C object samples/server.p/server.c.o 00:01:03.167 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:03.167 [25/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:03.167 [26/37] Compiling C object samples/client.p/client.c.o 00:01:03.167 [27/37] Linking target samples/client 00:01:03.167 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:03.167 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:03.167 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:03.429 [31/37] Linking target test/unit_tests 00:01:03.429 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:03.429 [33/37] Linking target samples/server 00:01:03.429 [34/37] Linking target samples/gpio-pci-idio-16 00:01:03.429 [35/37] Linking target samples/shadow_ioeventfd_server 00:01:03.429 [36/37] Linking target samples/null 00:01:03.429 [37/37] Linking target samples/lspci 00:01:03.429 INFO: autodetecting backend as ninja 00:01:03.429 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:03.690 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:04.266 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:04.266 ninja: no work to do. 
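The libvfio-user portion above follows meson's standard out-of-tree flow: configure into a separate build directory, compile with ninja, then install staged under `DESTDIR` so nothing touches the live system. A sketch that echoes the equivalent commands (directory names are examples; meson and the sources are assumed unavailable here):

```shell
# Out-of-tree meson/ninja flow, mirroring the libvfio-user build above.
# Echoed rather than executed; all paths are illustrative.
SRC=libvfio-user                  # source checkout
BUILD=$SRC/build-debug            # separate build dir ("buildtype : debug")
STAGE=$PWD/stage                  # DESTDIR staging root, as in the log
echo "meson setup --buildtype=debug $BUILD $SRC"
echo "ninja -C $BUILD"
echo "DESTDIR=$STAGE meson install --quiet -C $BUILD"
```

Staging under `DESTDIR` is what lets the pipeline keep the freshly built library inside the workspace (`spdk/build/libvfio-user`) instead of installing to `/usr/local/lib`.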
00:01:10.865 The Meson build system
00:01:10.865 Version: 1.3.1
00:01:10.865 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:10.865 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:10.865 Build type: native build
00:01:10.865 Program cat found: YES (/usr/bin/cat)
00:01:10.865 Project name: DPDK
00:01:10.865 Project version: 24.03.0
00:01:10.865 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:10.865 C linker for the host machine: cc ld.bfd 2.39-16
00:01:10.865 Host machine cpu family: x86_64
00:01:10.865 Host machine cpu: x86_64
00:01:10.865 Message: ## Building in Developer Mode ##
00:01:10.865 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:10.866 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:10.866 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:10.866 Program python3 found: YES (/usr/bin/python3)
00:01:10.866 Program cat found: YES (/usr/bin/cat)
00:01:10.866 Compiler for C supports arguments -march=native: YES
00:01:10.866 Checking for size of "void *" : 8
00:01:10.866 Checking for size of "void *" : 8 (cached)
00:01:10.866 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:10.866 Library m found: YES
00:01:10.866 Library numa found: YES
00:01:10.866 Has header "numaif.h" : YES
00:01:10.866 Library fdt found: NO
00:01:10.866 Library execinfo found: NO
00:01:10.866 Has header "execinfo.h" : YES
00:01:10.866 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:10.866 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:10.866 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:10.866 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:10.866 Run-time dependency openssl found: YES 3.0.9
00:01:10.866 Run-time dependency libpcap found: YES 1.10.4
00:01:10.866 Has header "pcap.h" with dependency libpcap: YES
00:01:10.866 Compiler for C supports arguments -Wcast-qual: YES
00:01:10.866 Compiler for C supports arguments -Wdeprecated: YES
00:01:10.866 Compiler for C supports arguments -Wformat: YES
00:01:10.866 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:10.866 Compiler for C supports arguments -Wformat-security: NO
00:01:10.866 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:10.866 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:10.866 Compiler for C supports arguments -Wnested-externs: YES
00:01:10.866 Compiler for C supports arguments -Wold-style-definition: YES
00:01:10.866 Compiler for C supports arguments -Wpointer-arith: YES
00:01:10.866 Compiler for C supports arguments -Wsign-compare: YES
00:01:10.866 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:10.866 Compiler for C supports arguments -Wundef: YES
00:01:10.866 Compiler for C supports arguments -Wwrite-strings: YES
00:01:10.866 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:10.866 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:10.866 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:10.866 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:10.866 Program objdump found: YES (/usr/bin/objdump)
00:01:10.866 Compiler for C supports arguments -mavx512f: YES
00:01:10.866 Checking if "AVX512 checking" compiles: YES
00:01:10.866 Fetching value of define "__SSE4_2__" : 1
00:01:10.866 Fetching value of define "__AES__" : 1
00:01:10.866 Fetching value of define "__AVX__" : 1
00:01:10.866 Fetching value of define "__AVX2__" : (undefined)
00:01:10.866 Fetching value of define "__AVX512BW__" : (undefined)
00:01:10.866 Fetching value of define "__AVX512CD__" : (undefined)
00:01:10.866 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:10.866 Fetching value of define "__AVX512F__" : (undefined)
00:01:10.866 Fetching value of define "__AVX512VL__" : (undefined)
00:01:10.866 Fetching value of define "__PCLMUL__" : 1
00:01:10.866 Fetching value of define "__RDRND__" : (undefined)
00:01:10.866 Fetching value of define "__RDSEED__" : (undefined)
00:01:10.866 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:10.866 Fetching value of define "__znver1__" : (undefined)
00:01:10.866 Fetching value of define "__znver2__" : (undefined)
00:01:10.866 Fetching value of define "__znver3__" : (undefined)
00:01:10.866 Fetching value of define "__znver4__" : (undefined)
00:01:10.866 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:10.866 Message: lib/log: Defining dependency "log"
00:01:10.866 Message: lib/kvargs: Defining dependency "kvargs"
00:01:10.866 Message: lib/telemetry: Defining dependency "telemetry"
00:01:10.866 Checking for function "getentropy" : NO
00:01:10.866 Message: lib/eal: Defining dependency "eal"
00:01:10.866 Message: lib/ring: Defining dependency "ring"
00:01:10.866 Message: lib/rcu: Defining dependency "rcu"
00:01:10.866 Message: lib/mempool: Defining dependency "mempool"
00:01:10.866 Message: lib/mbuf: Defining dependency "mbuf"
00:01:10.866 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:10.866 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:10.866 Compiler for C supports arguments -mpclmul: YES
00:01:10.866 Compiler for C supports arguments -maes: YES
00:01:10.866 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:10.866 Compiler for C supports arguments -mavx512bw: YES
00:01:10.866 Compiler for C supports arguments -mavx512dq: YES
00:01:10.866 Compiler for C supports arguments -mavx512vl: YES
00:01:10.866 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:10.866 Compiler for C supports arguments -mavx2: YES
00:01:10.866 Compiler for C supports arguments -mavx: YES
00:01:10.866 Message: lib/net: Defining dependency "net"
00:01:10.866 Message: lib/meter: Defining dependency "meter"
00:01:10.866 Message: lib/ethdev: Defining dependency "ethdev"
00:01:10.866 Message: lib/pci: Defining dependency "pci"
00:01:10.866 Message: lib/cmdline: Defining dependency "cmdline"
00:01:10.866 Message: lib/hash: Defining dependency "hash"
00:01:10.866 Message: lib/timer: Defining dependency "timer"
00:01:10.866 Message: lib/compressdev: Defining dependency "compressdev"
00:01:10.866 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:10.866 Message: lib/dmadev: Defining dependency "dmadev"
00:01:10.866 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:10.866 Message: lib/power: Defining dependency "power"
00:01:10.866 Message: lib/reorder: Defining dependency "reorder"
00:01:10.866 Message: lib/security: Defining dependency "security"
00:01:10.866 Has header "linux/userfaultfd.h" : YES
00:01:10.866 Has header "linux/vduse.h" : YES
00:01:10.866 Message: lib/vhost: Defining dependency "vhost"
00:01:10.866 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:10.866 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:10.866 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:10.866 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:10.866 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:10.866 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:10.866 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:10.866 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:10.866 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:10.866 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:10.866 Program doxygen found: YES (/usr/bin/doxygen)
00:01:10.866 Configuring doxy-api-html.conf using configuration
00:01:10.866 Configuring doxy-api-man.conf using configuration
00:01:10.866 Program mandb found: YES (/usr/bin/mandb)
00:01:10.866 Program sphinx-build found: NO
00:01:10.866 Configuring rte_build_config.h using configuration
00:01:10.866 Message:
00:01:10.866 =================
00:01:10.866 Applications Enabled
00:01:10.866 =================
00:01:10.866
00:01:10.866 apps:
00:01:10.866
00:01:10.866
00:01:10.866 Message:
00:01:10.866 =================
00:01:10.866 Libraries Enabled
00:01:10.866 =================
00:01:10.866
00:01:10.866 libs:
00:01:10.866 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:10.866 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:10.866 cryptodev, dmadev, power, reorder, security, vhost,
00:01:10.866
00:01:10.866 Message:
00:01:10.866 ===============
00:01:10.866 Drivers Enabled
00:01:10.866 ===============
00:01:10.866
00:01:10.866 common:
00:01:10.866
00:01:10.866 bus:
00:01:10.866 pci, vdev,
00:01:10.866 mempool:
00:01:10.866 ring,
00:01:10.866 dma:
00:01:10.866
00:01:10.866 net:
00:01:10.866
00:01:10.866 crypto:
00:01:10.866
00:01:10.866 compress:
00:01:10.866
00:01:10.866 vdpa:
00:01:10.866
00:01:10.866
00:01:10.866 Message:
00:01:10.866 =================
00:01:10.866 Content Skipped
00:01:10.866 =================
00:01:10.866
00:01:10.866 apps:
00:01:10.866 dumpcap: explicitly disabled via build config
00:01:10.866 graph: explicitly disabled via build config
00:01:10.866 pdump: explicitly disabled via build config
00:01:10.866 proc-info: explicitly disabled via build config
00:01:10.866 test-acl: explicitly disabled via build config
00:01:10.866 test-bbdev: explicitly disabled via build config
00:01:10.866 test-cmdline: explicitly disabled via build config
00:01:10.866 test-compress-perf: explicitly disabled via build config
00:01:10.866 test-crypto-perf: explicitly disabled via build config
00:01:10.866 test-dma-perf: explicitly disabled via build config
00:01:10.866 test-eventdev: explicitly disabled via build config
00:01:10.866 test-fib: explicitly disabled via build config
00:01:10.866 test-flow-perf: explicitly disabled via build config
00:01:10.866 test-gpudev: explicitly disabled via build config
00:01:10.866 test-mldev: explicitly disabled via build config
00:01:10.866 test-pipeline: explicitly disabled via build config
00:01:10.866 test-pmd: explicitly disabled via build config
00:01:10.866 test-regex: explicitly disabled via build config
00:01:10.866 test-sad: explicitly disabled via build config
00:01:10.866 test-security-perf: explicitly disabled via build config
00:01:10.866
00:01:10.866 libs:
00:01:10.866 argparse: explicitly disabled via build config
00:01:10.866 metrics: explicitly disabled via build config
00:01:10.866 acl: explicitly disabled via build config
00:01:10.866 bbdev: explicitly disabled via build config
00:01:10.866 bitratestats: explicitly disabled via build config
00:01:10.866 bpf: explicitly disabled via build config
00:01:10.866 cfgfile: explicitly disabled via build config
00:01:10.866 distributor: explicitly disabled via build config
00:01:10.866 efd: explicitly disabled via build config
00:01:10.866 eventdev: explicitly disabled via build config
00:01:10.866 dispatcher: explicitly disabled via build config
00:01:10.866 gpudev: explicitly disabled via build config
00:01:10.866 gro: explicitly disabled via build config
00:01:10.867 gso: explicitly disabled via build config
00:01:10.867 ip_frag: explicitly disabled via build config
00:01:10.867 jobstats: explicitly disabled via build config
00:01:10.867 latencystats: explicitly disabled via build config
00:01:10.867 lpm: explicitly disabled via build config
00:01:10.867 member: explicitly disabled via build config
00:01:10.867 pcapng: explicitly disabled via build config
00:01:10.867 rawdev: explicitly disabled via build config
00:01:10.867 regexdev: explicitly disabled via build config
00:01:10.867 mldev: explicitly disabled via build config
00:01:10.867 rib: explicitly disabled via build config
00:01:10.867 sched: explicitly disabled via build config
00:01:10.867 stack: explicitly disabled via build config
00:01:10.867 ipsec: explicitly disabled via build config
00:01:10.867 pdcp: explicitly disabled via build config
00:01:10.867 fib: explicitly disabled via build config
00:01:10.867 port: explicitly disabled via build config
00:01:10.867 pdump: explicitly disabled via build config
00:01:10.867 table: explicitly disabled via build config
00:01:10.867 pipeline: explicitly disabled via build config
00:01:10.867 graph: explicitly disabled via build config
00:01:10.867 node: explicitly disabled via build config
00:01:10.867
00:01:10.867 drivers:
00:01:10.867 common/cpt: not in enabled drivers build config
00:01:10.867 common/dpaax: not in enabled drivers build config
00:01:10.867 common/iavf: not in enabled drivers build config
00:01:10.867 common/idpf: not in enabled drivers build config
00:01:10.867 common/ionic: not in enabled drivers build config
00:01:10.867 common/mvep: not in enabled drivers build config
00:01:10.867 common/octeontx: not in enabled drivers build config
00:01:10.867 bus/auxiliary: not in enabled drivers build config
00:01:10.867 bus/cdx: not in enabled drivers build config
00:01:10.867 bus/dpaa: not in enabled drivers build config
00:01:10.867 bus/fslmc: not in enabled drivers build config
00:01:10.867 bus/ifpga: not in enabled drivers build config
00:01:10.867 bus/platform: not in enabled drivers build config
00:01:10.867 bus/uacce: not in enabled drivers build config
00:01:10.867 bus/vmbus: not in enabled drivers build config
00:01:10.867 common/cnxk: not in enabled drivers build config
00:01:10.867 common/mlx5: not in enabled drivers build config
00:01:10.867 common/nfp: not in enabled drivers build config
00:01:10.867 common/nitrox: not in enabled drivers build config
00:01:10.867 common/qat: not in enabled drivers build config
00:01:10.867 common/sfc_efx: not in enabled drivers build config
00:01:10.867 mempool/bucket: not in enabled drivers build config
00:01:10.867 mempool/cnxk: not in enabled drivers build config
00:01:10.867 mempool/dpaa: not in enabled drivers build config
00:01:10.867 mempool/dpaa2: not in enabled drivers build config
00:01:10.867 mempool/octeontx: not in enabled drivers build config
00:01:10.867 mempool/stack: not in enabled drivers build config
00:01:10.867 dma/cnxk: not in enabled drivers build config
00:01:10.867 dma/dpaa: not in enabled drivers build config
00:01:10.867 dma/dpaa2: not in enabled drivers build config
00:01:10.867 dma/hisilicon: not in enabled drivers build config
00:01:10.867 dma/idxd: not in enabled drivers build config
00:01:10.867 dma/ioat: not in enabled drivers build config
00:01:10.867 dma/skeleton: not in enabled drivers build config
00:01:10.867 net/af_packet: not in enabled drivers build config
00:01:10.867 net/af_xdp: not in enabled drivers build config
00:01:10.867 net/ark: not in enabled drivers build config
00:01:10.867 net/atlantic: not in enabled drivers build config
00:01:10.867 net/avp: not in enabled drivers build config
00:01:10.867 net/axgbe: not in enabled drivers build config
00:01:10.867 net/bnx2x: not in enabled drivers build config
00:01:10.867 net/bnxt: not in enabled drivers build config
00:01:10.867 net/bonding: not in enabled drivers build config
00:01:10.867 net/cnxk: not in enabled drivers build config
00:01:10.867 net/cpfl: not in enabled drivers build config
00:01:10.867 net/cxgbe: not in enabled drivers build config
00:01:10.867 net/dpaa: not in enabled drivers build config
00:01:10.867 net/dpaa2: not in enabled drivers build config
00:01:10.867 net/e1000: not in enabled drivers build config
00:01:10.867 net/ena: not in enabled drivers build config
00:01:10.867 net/enetc: not in enabled drivers build config
00:01:10.867 net/enetfec: not in enabled drivers build config
00:01:10.867 net/enic: not in enabled drivers build config
00:01:10.867 net/failsafe: not in enabled drivers build config
00:01:10.867 net/fm10k: not in enabled drivers build config
00:01:10.867 net/gve: not in enabled drivers build config
00:01:10.867 net/hinic: not in enabled drivers build config
00:01:10.867 net/hns3: not in enabled drivers build config
00:01:10.867 net/i40e: not in enabled drivers build config
00:01:10.867 net/iavf: not in enabled drivers build config
00:01:10.867 net/ice: not in enabled drivers build config
00:01:10.867 net/idpf: not in enabled drivers build config
00:01:10.867 net/igc: not in enabled drivers build config
00:01:10.867 net/ionic: not in enabled drivers build config
00:01:10.867 net/ipn3ke: not in enabled drivers build config
00:01:10.867 net/ixgbe: not in enabled drivers build config
00:01:10.867 net/mana: not in enabled drivers build config
00:01:10.867 net/memif: not in enabled drivers build config
00:01:10.867 net/mlx4: not in enabled drivers build config
00:01:10.867 net/mlx5: not in enabled drivers build config
00:01:10.867 net/mvneta: not in enabled drivers build config
00:01:10.867 net/mvpp2: not in enabled drivers build config
00:01:10.867 net/netvsc: not in enabled drivers build config
00:01:10.867 net/nfb: not in enabled drivers build config
00:01:10.867 net/nfp: not in enabled drivers build config
00:01:10.867 net/ngbe: not in enabled drivers build config
00:01:10.867 net/null: not in enabled drivers build config
00:01:10.867 net/octeontx: not in enabled drivers build config
00:01:10.867 net/octeon_ep: not in enabled drivers build config
00:01:10.867 net/pcap: not in enabled drivers build config
00:01:10.867 net/pfe: not in enabled drivers build config
00:01:10.867 net/qede: not in enabled drivers build config
00:01:10.867 net/ring: not in enabled drivers build config
00:01:10.867 net/sfc: not in enabled drivers build config
00:01:10.867 net/softnic: not in enabled drivers build config
00:01:10.867 net/tap: not in enabled drivers build config
00:01:10.867 net/thunderx: not in enabled drivers build config
00:01:10.867 net/txgbe: not in enabled drivers build config
00:01:10.867 net/vdev_netvsc: not in enabled drivers build config
00:01:10.867 net/vhost: not in enabled drivers build config
00:01:10.867 net/virtio: not in enabled drivers build config
00:01:10.867 net/vmxnet3: not in enabled drivers build config
00:01:10.867 raw/*: missing internal dependency, "rawdev"
00:01:10.867 crypto/armv8: not in enabled drivers build config
00:01:10.867 crypto/bcmfs: not in enabled drivers build config
00:01:10.867 crypto/caam_jr: not in enabled drivers build config
00:01:10.867 crypto/ccp: not in enabled drivers build config
00:01:10.867 crypto/cnxk: not in enabled drivers build config
00:01:10.867 crypto/dpaa_sec: not in enabled drivers build config
00:01:10.867 crypto/dpaa2_sec: not in enabled drivers build config
00:01:10.867 crypto/ipsec_mb: not in enabled drivers build config
00:01:10.867 crypto/mlx5: not in enabled drivers build config
00:01:10.867 crypto/mvsam: not in enabled drivers build config
00:01:10.867 crypto/nitrox: not in enabled drivers build config
00:01:10.867 crypto/null: not in enabled drivers build config
00:01:10.867 crypto/octeontx: not in enabled drivers build config
00:01:10.867 crypto/openssl: not in enabled drivers build config
00:01:10.867 crypto/scheduler: not in enabled drivers build config
00:01:10.867 crypto/uadk: not in enabled drivers build config
00:01:10.867 crypto/virtio: not in enabled drivers build config
00:01:10.867 compress/isal: not in enabled drivers build config
00:01:10.867 compress/mlx5: not in enabled drivers build config
00:01:10.867 compress/nitrox: not in enabled drivers build config
00:01:10.867 compress/octeontx: not in enabled drivers build config
00:01:10.867 compress/zlib: not in enabled drivers build config
00:01:10.867 regex/*: missing internal dependency, "regexdev"
00:01:10.867 ml/*: missing internal dependency, "mldev"
00:01:10.867 vdpa/ifc: not in enabled drivers build config
00:01:10.867 vdpa/mlx5: not in enabled drivers build config
00:01:10.867 vdpa/nfp: not in enabled drivers build config
00:01:10.867 vdpa/sfc: not in enabled drivers build config
00:01:10.867 event/*: missing internal dependency, "eventdev"
00:01:10.867 baseband/*: missing internal dependency, "bbdev"
00:01:10.867 gpu/*: missing internal dependency, "gpudev"
00:01:10.867
00:01:10.867
00:01:10.867 Build targets in project: 85
00:01:10.867
00:01:10.867 DPDK 24.03.0
00:01:10.867
00:01:10.867 User defined options
00:01:10.867 buildtype : debug
00:01:10.867 default_library : shared
00:01:10.867 libdir : lib
00:01:10.867 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:10.867 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:10.867 c_link_args :
00:01:10.867 cpu_instruction_set: native
00:01:10.867 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:10.867 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:10.867 enable_docs : false
00:01:10.867 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:10.867 enable_kmods : false
00:01:10.867 max_lcores : 128
00:01:10.867 tests : false
00:01:10.867
00:01:10.867 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:11.438 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:11.438 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:11.438 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:11.438 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:11.438 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:11.438 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:11.438 [6/268] Linking static target lib/librte_kvargs.a
00:01:11.438 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:11.438 [8/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:11.438 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:11.438 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:11.438 [11/268] Linking static target lib/librte_log.a
00:01:11.438 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:11.698 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:11.698 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:12.273 [15/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.273 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:12.274 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:12.274 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:12.274 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:12.274 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:12.274 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:12.274 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:12.274 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:12.536 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:12.536 [25/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:12.537 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:12.537 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:12.537 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:12.537 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:12.537 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:12.537 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:12.537 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:12.537 [33/268] Linking static target lib/librte_telemetry.a
00:01:12.537 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:12.537 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:12.537 [36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:12.537 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:12.537 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:12.537 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:12.537 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:12.537 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:12.799 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:12.799 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:12.799 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:12.799 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:12.799 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:12.799 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:12.799 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:12.799 [49/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:12.799 [50/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:13.062 [51/268] Linking target lib/librte_log.so.24.1
00:01:13.062 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:13.324 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:13.324 [54/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:13.324 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:13.324 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:13.324 [57/268] Linking target lib/librte_kvargs.so.24.1
00:01:13.324 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:13.324 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:13.324 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:13.324 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:13.324 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:13.324 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:13.585 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:13.585 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:13.585 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:13.585 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:13.585 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:13.585 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:13.585 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:13.585 [71/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:13.585 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:13.585 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:13.585 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:13.585 [75/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:13.585 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:13.850 [77/268] Linking target lib/librte_telemetry.so.24.1
00:01:13.850 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:13.850 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:13.850 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:13.850 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:13.850 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:14.112 [83/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:14.112 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:14.112 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:14.112 [86/268] Linking static target lib/librte_ring.a
00:01:14.112 [87/268] Linking static target lib/librte_eal.a
00:01:14.405 [88/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:14.405 [89/268] Linking static target lib/librte_rcu.a
00:01:14.405 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:14.405 [91/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:14.405 [92/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:14.405 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:14.405 [94/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:14.405 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:14.405 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:14.405 [97/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:14.405 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:14.665 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:14.665 [100/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:14.665 [101/268] Linking static target lib/librte_pci.a
00:01:14.665 [102/268] Linking static target lib/librte_mempool.a
00:01:14.665 [103/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:14.665 [104/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:14.665 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:14.665 [106/268] Linking static target lib/librte_meter.a
00:01:14.665 [107/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:14.665 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:14.665 [109/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:14.927 [110/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:14.927 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:14.927 [112/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:14.927 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:14.927 [114/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:14.927 [115/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:14.927 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:14.927 [117/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:14.927 [118/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:14.927 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:14.927 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:14.927 [121/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:14.927 [122/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:14.927 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:14.927 [124/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:14.927 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:14.927 [126/268] Linking static target lib/librte_net.a
00:01:14.927 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:15.189 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:15.189 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:15.189 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:15.189 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:15.189 [132/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:15.189 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:15.450 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:15.450 [135/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:15.450 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:15.450 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:15.450 [138/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:15.450 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:15.711 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:15.711 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:15.711 [142/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:15.711 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:15.711 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:15.711 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:15.971 [146/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:15.971 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:15.971 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:15.971 [149/268] Linking static target lib/librte_cmdline.a
00:01:15.971 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:15.971 [151/268] Linking static target lib/librte_timer.a
00:01:15.972 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:16.234 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:16.234 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:16.235 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:16.235 [156/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:16.235 [157/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:16.235 [158/268] Linking static target lib/librte_mbuf.a
00:01:16.235 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:16.235 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:16.235 [161/268] Linking static target lib/librte_dmadev.a
00:01:16.497 [162/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:16.497 [163/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:16.497 [164/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:16.497 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:16.497 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:16.497 [167/268] Linking static target lib/librte_hash.a
00:01:16.497 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:16.497 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:16.497 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:16.757 [171/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:16.757 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:16.757 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:16.757 [174/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:16.757 [175/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:16.757 [176/268] Linking static target lib/librte_compressdev.a
00:01:16.757 [177/268] Linking static target lib/librte_power.a
00:01:17.018 [178/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.018 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:17.018 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:17.018 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:17.278 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.278 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:17.278 [184/268] Linking static target lib/librte_reorder.a
00:01:17.278 [185/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.278 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:17.278 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:17.278 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:17.537 [189/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.537 [190/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.537 [191/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:17.537 [192/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.537 [193/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:17.537 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:17.537 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:17.537 [196/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:17.537 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:17.537 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:17.537 [199/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:17.537 [200/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.537 [201/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:17.537 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:17.537 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:17.796 [204/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:17.796 [205/268] Linking static target lib/librte_security.a
00:01:17.796 [206/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:17.796 [207/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:17.796 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:17.796 [209/268] Linking static target drivers/librte_bus_vdev.a
00:01:17.796 [210/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:17.796 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:17.796 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:17.796 [213/268] Linking static target drivers/librte_bus_pci.a
00:01:17.796 [214/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:17.796 [215/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:17.796 [216/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:17.796 [217/268] Linking static target lib/librte_cryptodev.a
00:01:17.796 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:17.796 [219/268] Linking static target lib/librte_ethdev.a
00:01:18.055 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:18.055 [221/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:18.055 [222/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:18.055 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:18.055 [224/268] Linking static target drivers/librte_mempool_ring.a
00:01:18.055 [225/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:18.313 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:18.878 [227/268] Generating lib/cryptodev.sym_chk with a custom command
(wrapped by meson to capture output) 00:01:20.772 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:21.705 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.705 [230/268] Linking target lib/librte_eal.so.24.1 00:01:21.705 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.963 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:21.963 [233/268] Linking target lib/librte_ring.so.24.1 00:01:21.963 [234/268] Linking target lib/librte_timer.so.24.1 00:01:21.963 [235/268] Linking target lib/librte_meter.so.24.1 00:01:21.963 [236/268] Linking target lib/librte_pci.so.24.1 00:01:21.963 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:21.963 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:21.963 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:21.963 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:21.963 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:21.963 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:21.963 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:22.222 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:22.222 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:22.222 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:22.222 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:22.222 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:22.222 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:22.222 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:22.480 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:22.480 [252/268] Linking target lib/librte_reorder.so.24.1 00:01:22.480 [253/268] Linking target lib/librte_compressdev.so.24.1 00:01:22.480 [254/268] Linking target lib/librte_net.so.24.1 00:01:22.480 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:22.738 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:22.738 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:22.738 [258/268] Linking target lib/librte_security.so.24.1 00:01:22.738 [259/268] Linking target lib/librte_hash.so.24.1 00:01:22.738 [260/268] Linking target lib/librte_cmdline.so.24.1 00:01:22.738 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:22.738 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:22.738 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:22.998 [264/268] Linking target lib/librte_power.so.24.1 00:01:26.286 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:26.286 [266/268] Linking static target lib/librte_vhost.a 00:01:27.222 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.222 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:27.222 INFO: autodetecting backend as ninja 00:01:27.222 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 32 00:01:28.207 CC lib/log/log.o 00:01:28.207 CC lib/log/log_flags.o 00:01:28.207 CC lib/log/log_deprecated.o 00:01:28.207 CC lib/ut_mock/mock.o 00:01:28.207 CC lib/ut/ut.o 00:01:28.465 LIB libspdk_log.a 00:01:28.465 LIB libspdk_ut.a 00:01:28.465 LIB libspdk_ut_mock.a 00:01:28.465 SO libspdk_ut_mock.so.6.0 00:01:28.465 SO libspdk_ut.so.2.0 00:01:28.465 SO libspdk_log.so.7.0 00:01:28.465 SYMLINK libspdk_ut_mock.so 00:01:28.465 
SYMLINK libspdk_ut.so 00:01:28.465 SYMLINK libspdk_log.so 00:01:28.723 CC lib/util/base64.o 00:01:28.723 CXX lib/trace_parser/trace.o 00:01:28.723 CC lib/util/bit_array.o 00:01:28.723 CC lib/util/cpuset.o 00:01:28.723 CC lib/dma/dma.o 00:01:28.723 CC lib/util/crc16.o 00:01:28.723 CC lib/ioat/ioat.o 00:01:28.723 CC lib/util/crc32.o 00:01:28.723 CC lib/util/crc32c.o 00:01:28.723 CC lib/util/crc32_ieee.o 00:01:28.723 CC lib/util/crc64.o 00:01:28.723 CC lib/util/dif.o 00:01:28.723 CC lib/util/fd.o 00:01:28.723 CC lib/util/file.o 00:01:28.723 CC lib/util/fd_group.o 00:01:28.724 CC lib/util/hexlify.o 00:01:28.724 CC lib/util/iov.o 00:01:28.724 CC lib/util/math.o 00:01:28.724 CC lib/util/net.o 00:01:28.724 CC lib/util/pipe.o 00:01:28.724 CC lib/util/strerror_tls.o 00:01:28.724 CC lib/util/string.o 00:01:28.724 CC lib/util/uuid.o 00:01:28.724 CC lib/util/xor.o 00:01:28.724 CC lib/util/zipf.o 00:01:28.724 CC lib/vfio_user/host/vfio_user_pci.o 00:01:28.724 CC lib/vfio_user/host/vfio_user.o 00:01:28.981 LIB libspdk_dma.a 00:01:28.981 SO libspdk_dma.so.4.0 00:01:28.981 SYMLINK libspdk_dma.so 00:01:28.981 LIB libspdk_ioat.a 00:01:28.981 SO libspdk_ioat.so.7.0 00:01:29.238 LIB libspdk_vfio_user.a 00:01:29.238 SYMLINK libspdk_ioat.so 00:01:29.238 SO libspdk_vfio_user.so.5.0 00:01:29.238 SYMLINK libspdk_vfio_user.so 00:01:29.238 LIB libspdk_util.a 00:01:29.496 SO libspdk_util.so.10.0 00:01:29.496 SYMLINK libspdk_util.so 00:01:29.754 LIB libspdk_trace_parser.a 00:01:29.754 SO libspdk_trace_parser.so.5.0 00:01:29.754 CC lib/env_dpdk/env.o 00:01:29.754 CC lib/env_dpdk/memory.o 00:01:29.754 CC lib/env_dpdk/pci.o 00:01:29.754 CC lib/env_dpdk/init.o 00:01:29.754 CC lib/env_dpdk/threads.o 00:01:29.754 CC lib/vmd/vmd.o 00:01:29.754 CC lib/idxd/idxd.o 00:01:29.754 CC lib/vmd/led.o 00:01:29.754 CC lib/env_dpdk/pci_ioat.o 00:01:29.754 CC lib/idxd/idxd_user.o 00:01:29.754 CC lib/idxd/idxd_kernel.o 00:01:29.755 CC lib/env_dpdk/pci_virtio.o 00:01:29.755 CC lib/env_dpdk/pci_vmd.o 00:01:29.755 CC 
lib/env_dpdk/pci_event.o 00:01:29.755 CC lib/env_dpdk/pci_idxd.o 00:01:29.755 CC lib/env_dpdk/sigbus_handler.o 00:01:29.755 CC lib/rdma_utils/rdma_utils.o 00:01:29.755 CC lib/env_dpdk/pci_dpdk.o 00:01:29.755 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:29.755 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:29.755 CC lib/json/json_parse.o 00:01:29.755 CC lib/rdma_provider/common.o 00:01:29.755 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:29.755 CC lib/json/json_util.o 00:01:29.755 CC lib/json/json_write.o 00:01:29.755 CC lib/conf/conf.o 00:01:29.755 SYMLINK libspdk_trace_parser.so 00:01:30.012 LIB libspdk_conf.a 00:01:30.013 SO libspdk_conf.so.6.0 00:01:30.013 SYMLINK libspdk_conf.so 00:01:30.013 LIB libspdk_rdma_provider.a 00:01:30.013 SO libspdk_rdma_provider.so.6.0 00:01:30.013 LIB libspdk_rdma_utils.a 00:01:30.270 SO libspdk_rdma_utils.so.1.0 00:01:30.270 SYMLINK libspdk_rdma_provider.so 00:01:30.270 LIB libspdk_json.a 00:01:30.270 SYMLINK libspdk_rdma_utils.so 00:01:30.270 SO libspdk_json.so.6.0 00:01:30.270 SYMLINK libspdk_json.so 00:01:30.270 LIB libspdk_idxd.a 00:01:30.526 SO libspdk_idxd.so.12.0 00:01:30.526 CC lib/jsonrpc/jsonrpc_server.o 00:01:30.526 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:30.526 CC lib/jsonrpc/jsonrpc_client.o 00:01:30.526 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:30.526 LIB libspdk_vmd.a 00:01:30.526 SYMLINK libspdk_idxd.so 00:01:30.526 SO libspdk_vmd.so.6.0 00:01:30.526 SYMLINK libspdk_vmd.so 00:01:30.784 LIB libspdk_jsonrpc.a 00:01:30.784 SO libspdk_jsonrpc.so.6.0 00:01:30.784 SYMLINK libspdk_jsonrpc.so 00:01:31.041 CC lib/rpc/rpc.o 00:01:31.298 LIB libspdk_rpc.a 00:01:31.298 SO libspdk_rpc.so.6.0 00:01:31.298 SYMLINK libspdk_rpc.so 00:01:31.556 CC lib/keyring/keyring.o 00:01:31.556 CC lib/keyring/keyring_rpc.o 00:01:31.556 CC lib/notify/notify.o 00:01:31.556 CC lib/notify/notify_rpc.o 00:01:31.556 CC lib/trace/trace.o 00:01:31.556 CC lib/trace/trace_flags.o 00:01:31.556 CC lib/trace/trace_rpc.o 00:01:31.814 LIB libspdk_notify.a 00:01:31.814 LIB 
libspdk_env_dpdk.a 00:01:31.814 SO libspdk_notify.so.6.0 00:01:31.814 LIB libspdk_keyring.a 00:01:31.814 SO libspdk_env_dpdk.so.15.0 00:01:31.814 SYMLINK libspdk_notify.so 00:01:31.814 SO libspdk_keyring.so.1.0 00:01:31.814 LIB libspdk_trace.a 00:01:31.814 SYMLINK libspdk_keyring.so 00:01:31.814 SO libspdk_trace.so.10.0 00:01:31.814 SYMLINK libspdk_trace.so 00:01:32.073 SYMLINK libspdk_env_dpdk.so 00:01:32.073 CC lib/thread/thread.o 00:01:32.073 CC lib/thread/iobuf.o 00:01:32.073 CC lib/sock/sock.o 00:01:32.073 CC lib/sock/sock_rpc.o 00:01:32.639 LIB libspdk_sock.a 00:01:32.639 SO libspdk_sock.so.10.0 00:01:32.639 SYMLINK libspdk_sock.so 00:01:32.898 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:32.898 CC lib/nvme/nvme_ctrlr.o 00:01:32.898 CC lib/nvme/nvme_fabric.o 00:01:32.898 CC lib/nvme/nvme_ns_cmd.o 00:01:32.898 CC lib/nvme/nvme_ns.o 00:01:32.898 CC lib/nvme/nvme_pcie_common.o 00:01:32.898 CC lib/nvme/nvme_pcie.o 00:01:32.898 CC lib/nvme/nvme_qpair.o 00:01:32.898 CC lib/nvme/nvme.o 00:01:32.898 CC lib/nvme/nvme_quirks.o 00:01:32.898 CC lib/nvme/nvme_transport.o 00:01:32.898 CC lib/nvme/nvme_discovery.o 00:01:32.898 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:32.898 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:32.898 CC lib/nvme/nvme_tcp.o 00:01:32.898 CC lib/nvme/nvme_opal.o 00:01:32.898 CC lib/nvme/nvme_io_msg.o 00:01:32.898 CC lib/nvme/nvme_poll_group.o 00:01:32.898 CC lib/nvme/nvme_zns.o 00:01:32.898 CC lib/nvme/nvme_stubs.o 00:01:32.898 CC lib/nvme/nvme_auth.o 00:01:32.898 CC lib/nvme/nvme_cuse.o 00:01:32.898 CC lib/nvme/nvme_vfio_user.o 00:01:32.898 CC lib/nvme/nvme_rdma.o 00:01:33.466 LIB libspdk_thread.a 00:01:33.725 SO libspdk_thread.so.10.1 00:01:33.725 SYMLINK libspdk_thread.so 00:01:33.992 CC lib/init/json_config.o 00:01:33.992 CC lib/init/subsystem.o 00:01:33.992 CC lib/init/subsystem_rpc.o 00:01:33.992 CC lib/init/rpc.o 00:01:33.992 CC lib/blob/blobstore.o 00:01:33.992 CC lib/blob/request.o 00:01:33.992 CC lib/blob/zeroes.o 00:01:33.992 CC lib/blob/blob_bs_dev.o 
00:01:33.992 CC lib/accel/accel.o 00:01:33.992 CC lib/vfu_tgt/tgt_endpoint.o 00:01:33.992 CC lib/vfu_tgt/tgt_rpc.o 00:01:33.992 CC lib/accel/accel_rpc.o 00:01:33.992 CC lib/virtio/virtio.o 00:01:33.992 CC lib/accel/accel_sw.o 00:01:33.992 CC lib/virtio/virtio_vhost_user.o 00:01:33.992 CC lib/virtio/virtio_vfio_user.o 00:01:33.992 CC lib/virtio/virtio_pci.o 00:01:34.252 LIB libspdk_init.a 00:01:34.252 SO libspdk_init.so.5.0 00:01:34.511 SYMLINK libspdk_init.so 00:01:34.511 LIB libspdk_vfu_tgt.a 00:01:34.511 LIB libspdk_virtio.a 00:01:34.511 SO libspdk_vfu_tgt.so.3.0 00:01:34.511 SO libspdk_virtio.so.7.0 00:01:34.511 CC lib/event/app.o 00:01:34.511 CC lib/event/reactor.o 00:01:34.511 CC lib/event/log_rpc.o 00:01:34.511 CC lib/event/app_rpc.o 00:01:34.511 CC lib/event/scheduler_static.o 00:01:34.511 SYMLINK libspdk_vfu_tgt.so 00:01:34.769 SYMLINK libspdk_virtio.so 00:01:35.027 LIB libspdk_event.a 00:01:35.027 SO libspdk_event.so.14.0 00:01:35.027 LIB libspdk_accel.a 00:01:35.027 SO libspdk_accel.so.16.0 00:01:35.285 SYMLINK libspdk_event.so 00:01:35.285 SYMLINK libspdk_accel.so 00:01:35.285 CC lib/bdev/bdev.o 00:01:35.285 CC lib/bdev/bdev_rpc.o 00:01:35.285 CC lib/bdev/bdev_zone.o 00:01:35.285 CC lib/bdev/part.o 00:01:35.285 CC lib/bdev/scsi_nvme.o 00:01:36.220 LIB libspdk_nvme.a 00:01:36.220 SO libspdk_nvme.so.13.1 00:01:36.479 SYMLINK libspdk_nvme.so 00:01:37.045 LIB libspdk_blob.a 00:01:37.045 SO libspdk_blob.so.11.0 00:01:37.304 SYMLINK libspdk_blob.so 00:01:37.304 CC lib/blobfs/blobfs.o 00:01:37.304 CC lib/blobfs/tree.o 00:01:37.304 CC lib/lvol/lvol.o 00:01:37.883 LIB libspdk_bdev.a 00:01:37.883 SO libspdk_bdev.so.16.0 00:01:37.883 SYMLINK libspdk_bdev.so 00:01:38.147 CC lib/ftl/ftl_core.o 00:01:38.147 CC lib/ftl/ftl_init.o 00:01:38.147 CC lib/ublk/ublk.o 00:01:38.147 CC lib/nvmf/ctrlr.o 00:01:38.147 CC lib/scsi/dev.o 00:01:38.147 CC lib/nvmf/ctrlr_discovery.o 00:01:38.147 CC lib/ftl/ftl_layout.o 00:01:38.147 CC lib/ublk/ublk_rpc.o 00:01:38.147 CC lib/scsi/lun.o 
00:01:38.147 CC lib/nbd/nbd.o 00:01:38.147 CC lib/nvmf/ctrlr_bdev.o 00:01:38.147 CC lib/ftl/ftl_debug.o 00:01:38.147 CC lib/scsi/port.o 00:01:38.147 CC lib/scsi/scsi.o 00:01:38.147 CC lib/nvmf/subsystem.o 00:01:38.147 CC lib/ftl/ftl_io.o 00:01:38.147 CC lib/nvmf/nvmf.o 00:01:38.147 CC lib/nbd/nbd_rpc.o 00:01:38.147 CC lib/ftl/ftl_sb.o 00:01:38.147 CC lib/scsi/scsi_bdev.o 00:01:38.147 CC lib/ftl/ftl_l2p.o 00:01:38.147 CC lib/scsi/scsi_pr.o 00:01:38.147 CC lib/nvmf/nvmf_rpc.o 00:01:38.147 CC lib/nvmf/transport.o 00:01:38.147 CC lib/ftl/ftl_l2p_flat.o 00:01:38.147 CC lib/scsi/scsi_rpc.o 00:01:38.147 CC lib/nvmf/tcp.o 00:01:38.147 CC lib/ftl/ftl_nv_cache.o 00:01:38.147 CC lib/scsi/task.o 00:01:38.147 CC lib/nvmf/stubs.o 00:01:38.406 LIB libspdk_blobfs.a 00:01:38.406 CC lib/nvmf/mdns_server.o 00:01:38.406 CC lib/ftl/ftl_band.o 00:01:38.406 CC lib/nvmf/vfio_user.o 00:01:38.406 CC lib/ftl/ftl_band_ops.o 00:01:38.406 CC lib/ftl/ftl_writer.o 00:01:38.406 SO libspdk_blobfs.so.10.0 00:01:38.669 CC lib/nvmf/rdma.o 00:01:38.669 CC lib/ftl/ftl_rq.o 00:01:38.669 CC lib/ftl/ftl_reloc.o 00:01:38.669 CC lib/ftl/ftl_l2p_cache.o 00:01:38.669 CC lib/nvmf/auth.o 00:01:38.669 CC lib/ftl/ftl_p2l.o 00:01:38.669 SYMLINK libspdk_blobfs.so 00:01:38.669 CC lib/ftl/mngt/ftl_mngt.o 00:01:38.669 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:38.669 LIB libspdk_lvol.a 00:01:38.669 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:38.669 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:38.669 SO libspdk_lvol.so.10.0 00:01:38.669 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:38.931 SYMLINK libspdk_lvol.so 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:38.931 LIB libspdk_nbd.a 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:38.931 SO libspdk_nbd.so.7.0 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:38.931 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:01:39.192 LIB libspdk_scsi.a 00:01:39.192 SYMLINK libspdk_nbd.so 00:01:39.192 CC lib/ftl/utils/ftl_conf.o 00:01:39.192 CC lib/ftl/utils/ftl_md.o 00:01:39.192 CC lib/ftl/utils/ftl_mempool.o 00:01:39.192 CC lib/ftl/utils/ftl_bitmap.o 00:01:39.192 SO libspdk_scsi.so.9.0 00:01:39.192 CC lib/ftl/utils/ftl_property.o 00:01:39.192 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:39.192 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:39.192 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:39.192 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:39.192 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:39.192 LIB libspdk_ublk.a 00:01:39.192 SYMLINK libspdk_scsi.so 00:01:39.461 SO libspdk_ublk.so.3.0 00:01:39.461 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:39.461 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:39.461 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:39.461 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:39.461 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:39.461 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:39.461 CC lib/ftl/base/ftl_base_dev.o 00:01:39.461 CC lib/ftl/base/ftl_base_bdev.o 00:01:39.461 SYMLINK libspdk_ublk.so 00:01:39.461 CC lib/ftl/ftl_trace.o 00:01:39.461 CC lib/iscsi/conn.o 00:01:39.461 CC lib/iscsi/init_grp.o 00:01:39.461 CC lib/iscsi/iscsi.o 00:01:39.461 CC lib/iscsi/md5.o 00:01:39.461 CC lib/iscsi/param.o 00:01:39.461 CC lib/vhost/vhost.o 00:01:39.721 CC lib/iscsi/portal_grp.o 00:01:39.721 CC lib/vhost/vhost_rpc.o 00:01:39.721 CC lib/iscsi/tgt_node.o 00:01:39.721 CC lib/iscsi/iscsi_subsystem.o 00:01:39.721 CC lib/iscsi/iscsi_rpc.o 00:01:39.721 CC lib/iscsi/task.o 00:01:39.721 CC lib/vhost/vhost_scsi.o 00:01:39.721 CC lib/vhost/vhost_blk.o 00:01:39.721 CC lib/vhost/rte_vhost_user.o 00:01:39.979 LIB libspdk_ftl.a 00:01:39.979 SO libspdk_ftl.so.9.0 00:01:40.545 SYMLINK libspdk_ftl.so 00:01:41.110 LIB libspdk_vhost.a 00:01:41.110 SO libspdk_vhost.so.8.0 00:01:41.110 LIB libspdk_iscsi.a 00:01:41.110 SO libspdk_iscsi.so.8.0 00:01:41.110 SYMLINK libspdk_vhost.so 00:01:41.368 SYMLINK libspdk_iscsi.so 
00:01:41.368 LIB libspdk_nvmf.a 00:01:41.368 SO libspdk_nvmf.so.19.0 00:01:41.626 SYMLINK libspdk_nvmf.so 00:01:41.884 CC module/env_dpdk/env_dpdk_rpc.o 00:01:41.884 CC module/vfu_device/vfu_virtio.o 00:01:41.884 CC module/vfu_device/vfu_virtio_blk.o 00:01:41.884 CC module/vfu_device/vfu_virtio_rpc.o 00:01:41.884 CC module/vfu_device/vfu_virtio_scsi.o 00:01:42.143 CC module/accel/dsa/accel_dsa.o 00:01:42.143 CC module/keyring/linux/keyring.o 00:01:42.143 CC module/accel/dsa/accel_dsa_rpc.o 00:01:42.143 CC module/keyring/linux/keyring_rpc.o 00:01:42.143 CC module/sock/posix/posix.o 00:01:42.143 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:42.143 CC module/accel/ioat/accel_ioat.o 00:01:42.143 CC module/accel/ioat/accel_ioat_rpc.o 00:01:42.143 CC module/accel/error/accel_error.o 00:01:42.143 CC module/accel/error/accel_error_rpc.o 00:01:42.143 CC module/blob/bdev/blob_bdev.o 00:01:42.143 CC module/scheduler/gscheduler/gscheduler.o 00:01:42.143 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:42.143 CC module/keyring/file/keyring.o 00:01:42.143 CC module/keyring/file/keyring_rpc.o 00:01:42.143 CC module/accel/iaa/accel_iaa.o 00:01:42.143 CC module/accel/iaa/accel_iaa_rpc.o 00:01:42.143 LIB libspdk_env_dpdk_rpc.a 00:01:42.143 SO libspdk_env_dpdk_rpc.so.6.0 00:01:42.143 SYMLINK libspdk_env_dpdk_rpc.so 00:01:42.143 LIB libspdk_scheduler_dynamic.a 00:01:42.401 LIB libspdk_keyring_linux.a 00:01:42.401 SO libspdk_scheduler_dynamic.so.4.0 00:01:42.401 LIB libspdk_keyring_file.a 00:01:42.401 LIB libspdk_scheduler_dpdk_governor.a 00:01:42.401 LIB libspdk_scheduler_gscheduler.a 00:01:42.401 SO libspdk_keyring_linux.so.1.0 00:01:42.401 SO libspdk_keyring_file.so.1.0 00:01:42.401 LIB libspdk_accel_dsa.a 00:01:42.401 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:42.401 LIB libspdk_accel_error.a 00:01:42.401 LIB libspdk_accel_ioat.a 00:01:42.401 SO libspdk_scheduler_gscheduler.so.4.0 00:01:42.401 SYMLINK libspdk_scheduler_dynamic.so 00:01:42.401 SO 
libspdk_accel_dsa.so.5.0 00:01:42.401 LIB libspdk_accel_iaa.a 00:01:42.401 SO libspdk_accel_error.so.2.0 00:01:42.401 SO libspdk_accel_ioat.so.6.0 00:01:42.401 SYMLINK libspdk_keyring_linux.so 00:01:42.401 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:42.401 SYMLINK libspdk_scheduler_gscheduler.so 00:01:42.401 SYMLINK libspdk_keyring_file.so 00:01:42.401 SO libspdk_accel_iaa.so.3.0 00:01:42.401 SYMLINK libspdk_accel_dsa.so 00:01:42.401 SYMLINK libspdk_accel_error.so 00:01:42.401 SYMLINK libspdk_accel_ioat.so 00:01:42.401 LIB libspdk_blob_bdev.a 00:01:42.401 SYMLINK libspdk_accel_iaa.so 00:01:42.401 SO libspdk_blob_bdev.so.11.0 00:01:42.401 SYMLINK libspdk_blob_bdev.so 00:01:42.664 LIB libspdk_vfu_device.a 00:01:42.664 CC module/bdev/error/vbdev_error.o 00:01:42.664 CC module/bdev/error/vbdev_error_rpc.o 00:01:42.664 CC module/bdev/split/vbdev_split.o 00:01:42.664 CC module/bdev/split/vbdev_split_rpc.o 00:01:42.664 CC module/blobfs/bdev/blobfs_bdev.o 00:01:42.664 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:42.664 CC module/bdev/passthru/vbdev_passthru.o 00:01:42.664 CC module/bdev/iscsi/bdev_iscsi.o 00:01:42.664 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:42.664 CC module/bdev/lvol/vbdev_lvol.o 00:01:42.664 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:42.664 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:42.664 CC module/bdev/gpt/gpt.o 00:01:42.664 CC module/bdev/gpt/vbdev_gpt.o 00:01:42.664 CC module/bdev/malloc/bdev_malloc.o 00:01:42.664 CC module/bdev/null/bdev_null.o 00:01:42.664 CC module/bdev/null/bdev_null_rpc.o 00:01:42.664 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:42.664 CC module/bdev/delay/vbdev_delay.o 00:01:42.664 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:42.664 CC module/bdev/aio/bdev_aio.o 00:01:42.664 CC module/bdev/raid/bdev_raid.o 00:01:42.664 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:42.664 CC module/bdev/aio/bdev_aio_rpc.o 00:01:42.664 CC module/bdev/raid/bdev_raid_rpc.o 00:01:42.664 CC module/bdev/virtio/bdev_virtio_blk.o 
00:01:42.664 CC module/bdev/raid/bdev_raid_sb.o 00:01:42.664 SO libspdk_vfu_device.so.3.0 00:01:42.664 CC module/bdev/nvme/bdev_nvme.o 00:01:42.664 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:42.664 CC module/bdev/ftl/bdev_ftl.o 00:01:42.925 SYMLINK libspdk_vfu_device.so 00:01:42.925 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:42.925 LIB libspdk_sock_posix.a 00:01:43.186 SO libspdk_sock_posix.so.6.0 00:01:43.186 CC module/bdev/raid/raid0.o 00:01:43.186 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:43.186 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:43.186 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:43.186 CC module/bdev/nvme/nvme_rpc.o 00:01:43.186 CC module/bdev/nvme/bdev_mdns_client.o 00:01:43.186 CC module/bdev/raid/raid1.o 00:01:43.186 LIB libspdk_blobfs_bdev.a 00:01:43.186 CC module/bdev/nvme/vbdev_opal.o 00:01:43.186 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:43.186 CC module/bdev/raid/concat.o 00:01:43.186 SO libspdk_blobfs_bdev.so.6.0 00:01:43.186 SYMLINK libspdk_sock_posix.so 00:01:43.186 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:43.186 LIB libspdk_bdev_split.a 00:01:43.186 SO libspdk_bdev_split.so.6.0 00:01:43.186 LIB libspdk_bdev_error.a 00:01:43.186 SYMLINK libspdk_blobfs_bdev.so 00:01:43.186 LIB libspdk_bdev_null.a 00:01:43.186 LIB libspdk_bdev_gpt.a 00:01:43.186 SO libspdk_bdev_error.so.6.0 00:01:43.186 SO libspdk_bdev_null.so.6.0 00:01:43.186 LIB libspdk_bdev_passthru.a 00:01:43.445 SYMLINK libspdk_bdev_split.so 00:01:43.445 SO libspdk_bdev_gpt.so.6.0 00:01:43.445 LIB libspdk_bdev_aio.a 00:01:43.445 SO libspdk_bdev_passthru.so.6.0 00:01:43.445 SYMLINK libspdk_bdev_null.so 00:01:43.445 SYMLINK libspdk_bdev_error.so 00:01:43.445 SO libspdk_bdev_aio.so.6.0 00:01:43.445 LIB libspdk_bdev_zone_block.a 00:01:43.445 LIB libspdk_bdev_iscsi.a 00:01:43.445 LIB libspdk_bdev_malloc.a 00:01:43.445 SYMLINK libspdk_bdev_gpt.so 00:01:43.445 SYMLINK libspdk_bdev_passthru.so 00:01:43.445 SO libspdk_bdev_iscsi.so.6.0 00:01:43.445 SO 
libspdk_bdev_zone_block.so.6.0 00:01:43.445 LIB libspdk_bdev_delay.a 00:01:43.445 SO libspdk_bdev_malloc.so.6.0 00:01:43.445 LIB libspdk_bdev_ftl.a 00:01:43.445 SO libspdk_bdev_delay.so.6.0 00:01:43.445 SYMLINK libspdk_bdev_aio.so 00:01:43.445 SYMLINK libspdk_bdev_iscsi.so 00:01:43.445 SO libspdk_bdev_ftl.so.6.0 00:01:43.445 SYMLINK libspdk_bdev_malloc.so 00:01:43.445 SYMLINK libspdk_bdev_zone_block.so 00:01:43.445 SYMLINK libspdk_bdev_delay.so 00:01:43.445 SYMLINK libspdk_bdev_ftl.so 00:01:43.704 LIB libspdk_bdev_virtio.a 00:01:43.704 LIB libspdk_bdev_lvol.a 00:01:43.704 SO libspdk_bdev_virtio.so.6.0 00:01:43.704 SO libspdk_bdev_lvol.so.6.0 00:01:43.704 SYMLINK libspdk_bdev_virtio.so 00:01:43.704 SYMLINK libspdk_bdev_lvol.so 00:01:43.962 LIB libspdk_bdev_raid.a 00:01:43.962 SO libspdk_bdev_raid.so.6.0 00:01:44.219 SYMLINK libspdk_bdev_raid.so 00:01:45.155 LIB libspdk_bdev_nvme.a 00:01:45.155 SO libspdk_bdev_nvme.so.7.0 00:01:45.415 SYMLINK libspdk_bdev_nvme.so 00:01:45.674 CC module/event/subsystems/vmd/vmd.o 00:01:45.674 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:45.674 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:45.674 CC module/event/subsystems/iobuf/iobuf.o 00:01:45.674 CC module/event/subsystems/keyring/keyring.o 00:01:45.674 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:45.674 CC module/event/subsystems/sock/sock.o 00:01:45.674 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:45.674 CC module/event/subsystems/scheduler/scheduler.o 00:01:45.933 LIB libspdk_event_keyring.a 00:01:45.933 LIB libspdk_event_vhost_blk.a 00:01:45.933 LIB libspdk_event_scheduler.a 00:01:45.933 LIB libspdk_event_vfu_tgt.a 00:01:45.933 LIB libspdk_event_sock.a 00:01:45.933 LIB libspdk_event_vmd.a 00:01:45.933 SO libspdk_event_keyring.so.1.0 00:01:45.933 LIB libspdk_event_iobuf.a 00:01:45.933 SO libspdk_event_vhost_blk.so.3.0 00:01:45.933 SO libspdk_event_scheduler.so.4.0 00:01:45.933 SO libspdk_event_vfu_tgt.so.3.0 00:01:45.933 SO libspdk_event_sock.so.5.0 
00:01:45.933 SO libspdk_event_vmd.so.6.0 00:01:45.933 SO libspdk_event_iobuf.so.3.0 00:01:45.933 SYMLINK libspdk_event_keyring.so 00:01:45.933 SYMLINK libspdk_event_vhost_blk.so 00:01:45.933 SYMLINK libspdk_event_scheduler.so 00:01:45.933 SYMLINK libspdk_event_vfu_tgt.so 00:01:45.933 SYMLINK libspdk_event_sock.so 00:01:45.933 SYMLINK libspdk_event_vmd.so 00:01:45.933 SYMLINK libspdk_event_iobuf.so 00:01:46.192 CC module/event/subsystems/accel/accel.o 00:01:46.451 LIB libspdk_event_accel.a 00:01:46.451 SO libspdk_event_accel.so.6.0 00:01:46.451 SYMLINK libspdk_event_accel.so 00:01:46.709 CC module/event/subsystems/bdev/bdev.o 00:01:46.967 LIB libspdk_event_bdev.a 00:01:46.967 SO libspdk_event_bdev.so.6.0 00:01:46.967 SYMLINK libspdk_event_bdev.so 00:01:47.225 CC module/event/subsystems/ublk/ublk.o 00:01:47.225 CC module/event/subsystems/scsi/scsi.o 00:01:47.225 CC module/event/subsystems/nbd/nbd.o 00:01:47.225 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:47.225 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:47.225 LIB libspdk_event_nbd.a 00:01:47.225 LIB libspdk_event_ublk.a 00:01:47.225 LIB libspdk_event_scsi.a 00:01:47.225 SO libspdk_event_nbd.so.6.0 00:01:47.225 SO libspdk_event_ublk.so.3.0 00:01:47.225 SO libspdk_event_scsi.so.6.0 00:01:47.483 SYMLINK libspdk_event_ublk.so 00:01:47.483 SYMLINK libspdk_event_nbd.so 00:01:47.483 SYMLINK libspdk_event_scsi.so 00:01:47.483 LIB libspdk_event_nvmf.a 00:01:47.483 SO libspdk_event_nvmf.so.6.0 00:01:47.483 SYMLINK libspdk_event_nvmf.so 00:01:47.483 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:47.483 CC module/event/subsystems/iscsi/iscsi.o 00:01:47.745 LIB libspdk_event_vhost_scsi.a 00:01:47.745 LIB libspdk_event_iscsi.a 00:01:47.745 SO libspdk_event_vhost_scsi.so.3.0 00:01:47.745 SO libspdk_event_iscsi.so.6.0 00:01:47.745 SYMLINK libspdk_event_vhost_scsi.so 00:01:47.745 SYMLINK libspdk_event_iscsi.so 00:01:48.007 SO libspdk.so.6.0 00:01:48.007 SYMLINK libspdk.so 00:01:48.269 TEST_HEADER 
include/spdk/accel.h 00:01:48.269 TEST_HEADER include/spdk/accel_module.h 00:01:48.269 TEST_HEADER include/spdk/assert.h 00:01:48.269 CXX app/trace/trace.o 00:01:48.269 TEST_HEADER include/spdk/barrier.h 00:01:48.269 TEST_HEADER include/spdk/base64.h 00:01:48.269 CC app/spdk_lspci/spdk_lspci.o 00:01:48.269 TEST_HEADER include/spdk/bdev.h 00:01:48.269 TEST_HEADER include/spdk/bdev_module.h 00:01:48.269 CC app/spdk_nvme_discover/discovery_aer.o 00:01:48.269 TEST_HEADER include/spdk/bdev_zone.h 00:01:48.269 TEST_HEADER include/spdk/bit_array.h 00:01:48.269 TEST_HEADER include/spdk/bit_pool.h 00:01:48.269 CC app/trace_record/trace_record.o 00:01:48.269 CC app/spdk_nvme_identify/identify.o 00:01:48.269 TEST_HEADER include/spdk/blob_bdev.h 00:01:48.269 CC app/spdk_top/spdk_top.o 00:01:48.269 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:48.269 CC test/rpc_client/rpc_client_test.o 00:01:48.269 TEST_HEADER include/spdk/blobfs.h 00:01:48.269 CC app/spdk_nvme_perf/perf.o 00:01:48.269 TEST_HEADER include/spdk/blob.h 00:01:48.269 TEST_HEADER include/spdk/conf.h 00:01:48.269 TEST_HEADER include/spdk/config.h 00:01:48.269 TEST_HEADER include/spdk/cpuset.h 00:01:48.269 TEST_HEADER include/spdk/crc16.h 00:01:48.269 TEST_HEADER include/spdk/crc32.h 00:01:48.269 TEST_HEADER include/spdk/crc64.h 00:01:48.269 TEST_HEADER include/spdk/dif.h 00:01:48.269 TEST_HEADER include/spdk/dma.h 00:01:48.269 TEST_HEADER include/spdk/endian.h 00:01:48.269 TEST_HEADER include/spdk/env_dpdk.h 00:01:48.269 TEST_HEADER include/spdk/env.h 00:01:48.269 TEST_HEADER include/spdk/event.h 00:01:48.269 TEST_HEADER include/spdk/fd_group.h 00:01:48.269 TEST_HEADER include/spdk/fd.h 00:01:48.269 TEST_HEADER include/spdk/file.h 00:01:48.269 TEST_HEADER include/spdk/ftl.h 00:01:48.269 TEST_HEADER include/spdk/gpt_spec.h 00:01:48.269 TEST_HEADER include/spdk/hexlify.h 00:01:48.269 TEST_HEADER include/spdk/idxd.h 00:01:48.269 TEST_HEADER include/spdk/histogram_data.h 00:01:48.269 TEST_HEADER include/spdk/idxd_spec.h 
00:01:48.269 TEST_HEADER include/spdk/init.h 00:01:48.269 TEST_HEADER include/spdk/ioat.h 00:01:48.269 TEST_HEADER include/spdk/ioat_spec.h 00:01:48.269 TEST_HEADER include/spdk/iscsi_spec.h 00:01:48.269 TEST_HEADER include/spdk/json.h 00:01:48.269 TEST_HEADER include/spdk/jsonrpc.h 00:01:48.269 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:48.269 TEST_HEADER include/spdk/keyring.h 00:01:48.269 TEST_HEADER include/spdk/keyring_module.h 00:01:48.269 TEST_HEADER include/spdk/likely.h 00:01:48.269 TEST_HEADER include/spdk/log.h 00:01:48.269 TEST_HEADER include/spdk/lvol.h 00:01:48.269 TEST_HEADER include/spdk/memory.h 00:01:48.269 TEST_HEADER include/spdk/mmio.h 00:01:48.269 CC app/iscsi_tgt/iscsi_tgt.o 00:01:48.269 TEST_HEADER include/spdk/nbd.h 00:01:48.269 CC app/spdk_dd/spdk_dd.o 00:01:48.269 TEST_HEADER include/spdk/notify.h 00:01:48.269 TEST_HEADER include/spdk/net.h 00:01:48.269 CC app/nvmf_tgt/nvmf_main.o 00:01:48.269 TEST_HEADER include/spdk/nvme.h 00:01:48.269 TEST_HEADER include/spdk/nvme_intel.h 00:01:48.269 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:48.269 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:48.269 TEST_HEADER include/spdk/nvme_spec.h 00:01:48.269 TEST_HEADER include/spdk/nvme_zns.h 00:01:48.269 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:48.269 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:48.269 TEST_HEADER include/spdk/nvmf.h 00:01:48.269 TEST_HEADER include/spdk/nvmf_spec.h 00:01:48.269 TEST_HEADER include/spdk/nvmf_transport.h 00:01:48.269 TEST_HEADER include/spdk/opal.h 00:01:48.269 TEST_HEADER include/spdk/opal_spec.h 00:01:48.269 TEST_HEADER include/spdk/pci_ids.h 00:01:48.269 TEST_HEADER include/spdk/pipe.h 00:01:48.269 TEST_HEADER include/spdk/queue.h 00:01:48.269 TEST_HEADER include/spdk/reduce.h 00:01:48.269 TEST_HEADER include/spdk/rpc.h 00:01:48.269 TEST_HEADER include/spdk/scheduler.h 00:01:48.269 TEST_HEADER include/spdk/scsi.h 00:01:48.269 TEST_HEADER include/spdk/scsi_spec.h 00:01:48.269 TEST_HEADER include/spdk/sock.h 
00:01:48.269 TEST_HEADER include/spdk/stdinc.h 00:01:48.269 CC examples/ioat/verify/verify.o 00:01:48.269 CC examples/util/zipf/zipf.o 00:01:48.269 TEST_HEADER include/spdk/string.h 00:01:48.269 CC test/env/vtophys/vtophys.o 00:01:48.269 CC examples/ioat/perf/perf.o 00:01:48.269 TEST_HEADER include/spdk/thread.h 00:01:48.269 TEST_HEADER include/spdk/trace_parser.h 00:01:48.269 CC test/env/pci/pci_ut.o 00:01:48.269 TEST_HEADER include/spdk/trace.h 00:01:48.269 TEST_HEADER include/spdk/tree.h 00:01:48.269 CC app/fio/nvme/fio_plugin.o 00:01:48.269 CC test/env/memory/memory_ut.o 00:01:48.269 TEST_HEADER include/spdk/ublk.h 00:01:48.269 TEST_HEADER include/spdk/util.h 00:01:48.269 CC test/app/jsoncat/jsoncat.o 00:01:48.269 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:48.269 TEST_HEADER include/spdk/uuid.h 00:01:48.269 CC test/app/histogram_perf/histogram_perf.o 00:01:48.269 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:48.269 TEST_HEADER include/spdk/version.h 00:01:48.269 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:48.269 CC app/spdk_tgt/spdk_tgt.o 00:01:48.269 TEST_HEADER include/spdk/vhost.h 00:01:48.269 TEST_HEADER include/spdk/vmd.h 00:01:48.269 CC test/thread/poller_perf/poller_perf.o 00:01:48.269 TEST_HEADER include/spdk/xor.h 00:01:48.269 TEST_HEADER include/spdk/zipf.h 00:01:48.269 CC test/app/stub/stub.o 00:01:48.269 CXX test/cpp_headers/accel.o 00:01:48.530 CC test/dma/test_dma/test_dma.o 00:01:48.530 CC test/app/bdev_svc/bdev_svc.o 00:01:48.530 LINK spdk_lspci 00:01:48.530 CC app/fio/bdev/fio_plugin.o 00:01:48.530 CC test/env/mem_callbacks/mem_callbacks.o 00:01:48.530 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:48.530 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:48.530 LINK spdk_nvme_discover 00:01:48.530 LINK rpc_client_test 00:01:48.530 LINK zipf 00:01:48.797 LINK vtophys 00:01:48.797 LINK jsoncat 00:01:48.797 LINK interrupt_tgt 00:01:48.797 LINK nvmf_tgt 00:01:48.797 LINK histogram_perf 00:01:48.797 LINK spdk_trace_record 00:01:48.797 
LINK poller_perf 00:01:48.797 LINK iscsi_tgt 00:01:48.797 LINK env_dpdk_post_init 00:01:48.797 LINK stub 00:01:48.797 CXX test/cpp_headers/accel_module.o 00:01:48.797 LINK ioat_perf 00:01:48.797 LINK verify 00:01:48.797 LINK bdev_svc 00:01:48.797 LINK spdk_tgt 00:01:48.797 CXX test/cpp_headers/assert.o 00:01:48.797 CXX test/cpp_headers/barrier.o 00:01:49.060 CXX test/cpp_headers/base64.o 00:01:49.060 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:49.060 CXX test/cpp_headers/bdev.o 00:01:49.060 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:49.060 CXX test/cpp_headers/bdev_module.o 00:01:49.060 LINK spdk_dd 00:01:49.060 CXX test/cpp_headers/bdev_zone.o 00:01:49.060 CXX test/cpp_headers/bit_array.o 00:01:49.060 LINK spdk_trace 00:01:49.060 CXX test/cpp_headers/bit_pool.o 00:01:49.060 CXX test/cpp_headers/blob_bdev.o 00:01:49.060 CXX test/cpp_headers/blobfs_bdev.o 00:01:49.060 CXX test/cpp_headers/blobfs.o 00:01:49.060 LINK test_dma 00:01:49.060 CXX test/cpp_headers/blob.o 00:01:49.060 LINK pci_ut 00:01:49.326 CXX test/cpp_headers/conf.o 00:01:49.326 LINK nvme_fuzz 00:01:49.326 CC examples/thread/thread/thread_ex.o 00:01:49.326 CXX test/cpp_headers/config.o 00:01:49.326 LINK spdk_nvme 00:01:49.326 CC test/event/event_perf/event_perf.o 00:01:49.326 CC test/event/reactor/reactor.o 00:01:49.326 CC examples/sock/hello_world/hello_sock.o 00:01:49.326 CXX test/cpp_headers/cpuset.o 00:01:49.326 CXX test/cpp_headers/crc16.o 00:01:49.326 CC test/event/reactor_perf/reactor_perf.o 00:01:49.326 CXX test/cpp_headers/crc32.o 00:01:49.326 CXX test/cpp_headers/crc64.o 00:01:49.611 CXX test/cpp_headers/dif.o 00:01:49.611 CXX test/cpp_headers/dma.o 00:01:49.611 CXX test/cpp_headers/endian.o 00:01:49.611 CC examples/vmd/lsvmd/lsvmd.o 00:01:49.611 CC examples/vmd/led/led.o 00:01:49.611 CC test/event/app_repeat/app_repeat.o 00:01:49.611 CC examples/idxd/perf/perf.o 00:01:49.611 LINK spdk_bdev 00:01:49.611 CC test/event/scheduler/scheduler.o 00:01:49.611 CXX test/cpp_headers/env_dpdk.o 
00:01:49.611 CC app/vhost/vhost.o 00:01:49.611 LINK reactor 00:01:49.611 CXX test/cpp_headers/env.o 00:01:49.611 LINK reactor_perf 00:01:49.611 LINK event_perf 00:01:49.611 CXX test/cpp_headers/event.o 00:01:49.900 LINK spdk_nvme_identify 00:01:49.900 LINK lsvmd 00:01:49.900 CXX test/cpp_headers/fd_group.o 00:01:49.900 LINK spdk_top 00:01:49.900 LINK mem_callbacks 00:01:49.900 CXX test/cpp_headers/fd.o 00:01:49.900 CXX test/cpp_headers/file.o 00:01:49.900 CXX test/cpp_headers/ftl.o 00:01:49.900 LINK spdk_nvme_perf 00:01:49.900 LINK app_repeat 00:01:49.900 LINK thread 00:01:49.900 LINK vhost_fuzz 00:01:49.900 CXX test/cpp_headers/gpt_spec.o 00:01:49.900 LINK led 00:01:49.900 CC test/accel/dif/dif.o 00:01:49.900 CC test/blobfs/mkfs/mkfs.o 00:01:49.900 CC test/nvme/aer/aer.o 00:01:49.900 LINK hello_sock 00:01:49.900 CXX test/cpp_headers/hexlify.o 00:01:49.900 CXX test/cpp_headers/histogram_data.o 00:01:49.900 CC test/nvme/reset/reset.o 00:01:49.900 CC test/nvme/sgl/sgl.o 00:01:49.900 CXX test/cpp_headers/idxd.o 00:01:50.174 CC test/lvol/esnap/esnap.o 00:01:50.174 CXX test/cpp_headers/idxd_spec.o 00:01:50.174 LINK vhost 00:01:50.174 CXX test/cpp_headers/init.o 00:01:50.174 LINK scheduler 00:01:50.174 CXX test/cpp_headers/ioat.o 00:01:50.174 CXX test/cpp_headers/ioat_spec.o 00:01:50.174 LINK idxd_perf 00:01:50.174 CXX test/cpp_headers/iscsi_spec.o 00:01:50.174 CXX test/cpp_headers/json.o 00:01:50.174 CC test/nvme/e2edp/nvme_dp.o 00:01:50.174 CC test/nvme/err_injection/err_injection.o 00:01:50.174 CC test/nvme/overhead/overhead.o 00:01:50.174 CC test/nvme/startup/startup.o 00:01:50.174 LINK mkfs 00:01:50.174 CC test/nvme/simple_copy/simple_copy.o 00:01:50.441 CC test/nvme/reserve/reserve.o 00:01:50.441 CC test/nvme/connect_stress/connect_stress.o 00:01:50.441 CC test/nvme/compliance/nvme_compliance.o 00:01:50.441 CC test/nvme/boot_partition/boot_partition.o 00:01:50.441 CC test/nvme/fused_ordering/fused_ordering.o 00:01:50.441 CXX test/cpp_headers/jsonrpc.o 00:01:50.441 
CXX test/cpp_headers/keyring.o 00:01:50.441 LINK reset 00:01:50.442 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:50.442 LINK aer 00:01:50.442 CC examples/nvme/hello_world/hello_world.o 00:01:50.442 CXX test/cpp_headers/keyring_module.o 00:01:50.442 CXX test/cpp_headers/likely.o 00:01:50.442 CC test/nvme/fdp/fdp.o 00:01:50.442 CC examples/nvme/reconnect/reconnect.o 00:01:50.442 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:50.704 LINK sgl 00:01:50.704 LINK startup 00:01:50.704 CC test/nvme/cuse/cuse.o 00:01:50.704 CC examples/nvme/arbitration/arbitration.o 00:01:50.704 CC examples/accel/perf/accel_perf.o 00:01:50.704 LINK err_injection 00:01:50.704 LINK boot_partition 00:01:50.704 LINK reserve 00:01:50.704 LINK connect_stress 00:01:50.704 CC examples/blob/hello_world/hello_blob.o 00:01:50.704 CC examples/blob/cli/blobcli.o 00:01:50.704 LINK fused_ordering 00:01:50.704 LINK nvme_dp 00:01:50.704 LINK overhead 00:01:50.704 LINK memory_ut 00:01:50.704 LINK dif 00:01:50.704 LINK simple_copy 00:01:50.704 CXX test/cpp_headers/log.o 00:01:50.971 CXX test/cpp_headers/lvol.o 00:01:50.971 LINK doorbell_aers 00:01:50.971 CXX test/cpp_headers/memory.o 00:01:50.971 CC examples/nvme/hotplug/hotplug.o 00:01:50.971 CXX test/cpp_headers/mmio.o 00:01:50.971 CXX test/cpp_headers/nbd.o 00:01:50.971 LINK nvme_compliance 00:01:50.971 CC examples/nvme/abort/abort.o 00:01:50.971 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:50.971 CXX test/cpp_headers/net.o 00:01:50.971 CXX test/cpp_headers/notify.o 00:01:50.971 CXX test/cpp_headers/nvme.o 00:01:50.971 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:50.971 CXX test/cpp_headers/nvme_intel.o 00:01:50.971 LINK hello_world 00:01:50.971 CXX test/cpp_headers/nvme_ocssd.o 00:01:50.971 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:50.971 CXX test/cpp_headers/nvme_spec.o 00:01:51.233 CXX test/cpp_headers/nvme_zns.o 00:01:51.233 CXX test/cpp_headers/nvmf_cmd.o 00:01:51.233 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:51.233 CXX 
test/cpp_headers/nvmf.o 00:01:51.233 CXX test/cpp_headers/nvmf_spec.o 00:01:51.233 CXX test/cpp_headers/nvmf_transport.o 00:01:51.233 LINK reconnect 00:01:51.233 LINK hello_blob 00:01:51.233 CXX test/cpp_headers/opal.o 00:01:51.233 CXX test/cpp_headers/opal_spec.o 00:01:51.233 LINK fdp 00:01:51.233 CXX test/cpp_headers/pci_ids.o 00:01:51.233 CXX test/cpp_headers/pipe.o 00:01:51.233 LINK arbitration 00:01:51.233 LINK hotplug 00:01:51.233 LINK cmb_copy 00:01:51.497 CXX test/cpp_headers/queue.o 00:01:51.497 CXX test/cpp_headers/reduce.o 00:01:51.497 CXX test/cpp_headers/rpc.o 00:01:51.497 LINK pmr_persistence 00:01:51.497 CXX test/cpp_headers/scheduler.o 00:01:51.497 CXX test/cpp_headers/scsi.o 00:01:51.497 CXX test/cpp_headers/scsi_spec.o 00:01:51.497 CXX test/cpp_headers/sock.o 00:01:51.497 LINK nvme_manage 00:01:51.497 CXX test/cpp_headers/stdinc.o 00:01:51.497 CXX test/cpp_headers/string.o 00:01:51.497 CXX test/cpp_headers/thread.o 00:01:51.497 CXX test/cpp_headers/trace.o 00:01:51.497 CXX test/cpp_headers/trace_parser.o 00:01:51.497 CXX test/cpp_headers/tree.o 00:01:51.497 CXX test/cpp_headers/ublk.o 00:01:51.497 CXX test/cpp_headers/util.o 00:01:51.497 LINK iscsi_fuzz 00:01:51.497 LINK accel_perf 00:01:51.497 CXX test/cpp_headers/uuid.o 00:01:51.497 CXX test/cpp_headers/version.o 00:01:51.497 CXX test/cpp_headers/vfio_user_pci.o 00:01:51.497 CXX test/cpp_headers/vfio_user_spec.o 00:01:51.497 CXX test/cpp_headers/vhost.o 00:01:51.755 CC test/bdev/bdevio/bdevio.o 00:01:51.755 CXX test/cpp_headers/vmd.o 00:01:51.755 CXX test/cpp_headers/xor.o 00:01:51.755 CXX test/cpp_headers/zipf.o 00:01:51.755 LINK blobcli 00:01:51.755 LINK abort 00:01:52.014 CC examples/bdev/hello_world/hello_bdev.o 00:01:52.014 CC examples/bdev/bdevperf/bdevperf.o 00:01:52.272 LINK bdevio 00:01:52.272 LINK hello_bdev 00:01:52.530 LINK cuse 00:01:52.789 LINK bdevperf 00:01:53.354 CC examples/nvmf/nvmf/nvmf.o 00:01:53.613 LINK nvmf 00:01:56.140 LINK esnap 00:01:56.140 00:01:56.140 real 0m56.199s 
00:01:56.140 user 11m15.059s 00:01:56.140 sys 2m24.290s 00:01:56.141 22:11:21 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:56.141 22:11:21 make -- common/autotest_common.sh@10 -- $ set +x 00:01:56.141 ************************************ 00:01:56.141 END TEST make 00:01:56.141 ************************************ 00:01:56.141 22:11:21 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:56.141 22:11:21 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:56.141 22:11:21 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:56.141 22:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.141 22:11:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:56.141 22:11:21 -- pm/common@44 -- $ pid=3666921 00:01:56.141 22:11:21 -- pm/common@50 -- $ kill -TERM 3666921 00:01:56.141 22:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.141 22:11:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:56.141 22:11:21 -- pm/common@44 -- $ pid=3666923 00:01:56.141 22:11:21 -- pm/common@50 -- $ kill -TERM 3666923 00:01:56.141 22:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.141 22:11:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:56.141 22:11:21 -- pm/common@44 -- $ pid=3666925 00:01:56.141 22:11:21 -- pm/common@50 -- $ kill -TERM 3666925 00:01:56.141 22:11:21 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.141 22:11:21 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:56.141 22:11:21 -- pm/common@44 -- $ pid=3666954 00:01:56.141 22:11:21 -- pm/common@50 -- $ sudo -E kill -TERM 3666954 00:01:56.400 22:11:21 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:56.400 22:11:21 -- nvmf/common.sh@7 -- # uname -s 00:01:56.400 22:11:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:56.400 22:11:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:56.400 22:11:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:56.400 22:11:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:56.400 22:11:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:56.400 22:11:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:56.400 22:11:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:56.400 22:11:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:56.400 22:11:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:56.400 22:11:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:56.400 22:11:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:01:56.400 22:11:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:01:56.400 22:11:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:56.400 22:11:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:56.400 22:11:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:56.400 22:11:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:56.400 22:11:21 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:56.400 22:11:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:56.400 22:11:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:56.400 22:11:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:56.400 22:11:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.400 22:11:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.400 22:11:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.400 22:11:21 -- paths/export.sh@5 -- # export PATH 00:01:56.400 22:11:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:56.400 22:11:21 -- nvmf/common.sh@47 -- # : 0 00:01:56.400 22:11:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:56.400 22:11:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:56.400 22:11:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:56.400 22:11:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:56.400 22:11:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:56.400 22:11:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:56.400 22:11:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:56.400 22:11:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:56.400 22:11:21 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:56.400 22:11:21 -- spdk/autotest.sh@32 -- # 
uname -s 00:01:56.400 22:11:21 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:56.400 22:11:21 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:56.400 22:11:21 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.400 22:11:21 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:56.400 22:11:21 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:56.400 22:11:21 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:56.400 22:11:21 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:56.400 22:11:21 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:56.400 22:11:21 -- spdk/autotest.sh@48 -- # udevadm_pid=3721226 00:01:56.400 22:11:21 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:56.400 22:11:21 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:56.400 22:11:21 -- pm/common@17 -- # local monitor 00:01:56.400 22:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.400 22:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.400 22:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.400 22:11:21 -- pm/common@21 -- # date +%s 00:01:56.400 22:11:21 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:56.400 22:11:21 -- pm/common@21 -- # date +%s 00:01:56.400 22:11:21 -- pm/common@25 -- # sleep 1 00:01:56.400 22:11:21 -- pm/common@21 -- # date +%s 00:01:56.400 22:11:21 -- pm/common@21 -- # date +%s 00:01:56.400 22:11:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721851881 00:01:56.400 22:11:21 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721851881 00:01:56.400 22:11:21 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721851881 00:01:56.400 22:11:21 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721851881 00:01:56.400 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721851881_collect-vmstat.pm.log 00:01:56.400 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721851881_collect-cpu-load.pm.log 00:01:56.400 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721851881_collect-cpu-temp.pm.log 00:01:56.400 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721851881_collect-bmc-pm.bmc.pm.log 00:01:57.336 22:11:22 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:57.336 22:11:22 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:57.336 22:11:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:01:57.336 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:01:57.336 22:11:22 -- spdk/autotest.sh@59 -- # create_test_list 00:01:57.336 22:11:22 -- common/autotest_common.sh@746 -- # xtrace_disable 00:01:57.336 22:11:22 -- common/autotest_common.sh@10 -- # set +x 00:01:57.336 22:11:22 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:01:57.336 22:11:22 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.336 22:11:22 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.336 22:11:22 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:57.336 22:11:22 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:57.336 22:11:22 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:57.336 22:11:22 -- common/autotest_common.sh@1453 -- # uname 00:01:57.336 22:11:22 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:01:57.336 22:11:22 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:57.336 22:11:22 -- common/autotest_common.sh@1473 -- # uname 00:01:57.336 22:11:22 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:01:57.336 22:11:22 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:57.336 22:11:22 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:57.336 22:11:22 -- spdk/autotest.sh@72 -- # hash lcov 00:01:57.336 22:11:22 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:57.336 22:11:22 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:57.336 --rc lcov_branch_coverage=1 00:01:57.336 --rc lcov_function_coverage=1 00:01:57.336 --rc genhtml_branch_coverage=1 00:01:57.336 --rc genhtml_function_coverage=1 00:01:57.336 --rc genhtml_legend=1 00:01:57.336 --rc geninfo_all_blocks=1 00:01:57.336 ' 00:01:57.336 22:11:22 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:57.336 --rc lcov_branch_coverage=1 00:01:57.336 --rc lcov_function_coverage=1 00:01:57.336 --rc genhtml_branch_coverage=1 00:01:57.336 --rc genhtml_function_coverage=1 00:01:57.336 --rc genhtml_legend=1 00:01:57.336 --rc geninfo_all_blocks=1 00:01:57.336 ' 00:01:57.336 22:11:22 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:57.336 --rc lcov_branch_coverage=1 00:01:57.336 --rc lcov_function_coverage=1 00:01:57.336 --rc genhtml_branch_coverage=1 00:01:57.336 --rc 
genhtml_function_coverage=1 00:01:57.337 --rc genhtml_legend=1 00:01:57.337 --rc geninfo_all_blocks=1 00:01:57.337 --no-external' 00:01:57.337 22:11:22 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:57.337 --rc lcov_branch_coverage=1 00:01:57.337 --rc lcov_function_coverage=1 00:01:57.337 --rc genhtml_branch_coverage=1 00:01:57.337 --rc genhtml_function_coverage=1 00:01:57.337 --rc genhtml_legend=1 00:01:57.337 --rc geninfo_all_blocks=1 00:01:57.337 --no-external' 00:01:57.337 22:11:22 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:01:57.595 lcov: LCOV version 1.14 00:01:57.595 22:11:23 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:01:59.509 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:01:59.509 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:01:59.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:01:59.510 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions 
found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:59.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:59.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:59.770 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:01:59.770 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:59.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:59.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:59.771 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:59.771 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no 
functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:59.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:59.771 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:00.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:00.030 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:00.030 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:18.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:18.127 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:33.029 22:11:57 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:33.029 22:11:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:33.029 22:11:57 -- common/autotest_common.sh@10 -- # set +x 00:02:33.029 22:11:57 -- spdk/autotest.sh@91 -- # rm -f 00:02:33.029 22:11:57 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:33.029 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:02:33.029 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:02:33.029 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:02:33.029 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:02:33.029 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:02:33.029 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 00:02:33.029 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:02:33.029 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:02:33.029 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:02:33.029 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:02:33.029 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:02:33.029 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:02:33.029 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:02:33.029 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:02:33.029 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:02:33.029 0000:80:04.1 (8086 3c21): 
Already using the ioatdma driver 00:02:33.029 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:02:33.029 22:11:58 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:33.029 22:11:58 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:33.029 22:11:58 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:33.029 22:11:58 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:33.029 22:11:58 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:33.029 22:11:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:33.029 22:11:58 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:33.029 22:11:58 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:33.029 22:11:58 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:33.029 22:11:58 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:33.030 22:11:58 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:33.030 22:11:58 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:33.030 22:11:58 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:33.030 22:11:58 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:33.030 22:11:58 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:33.030 No valid GPT data, bailing 00:02:33.030 22:11:58 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:33.289 22:11:58 -- scripts/common.sh@391 -- # pt= 00:02:33.289 22:11:58 -- scripts/common.sh@392 -- # return 1 00:02:33.289 22:11:58 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:33.289 1+0 records in 00:02:33.289 1+0 records out 00:02:33.289 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0021949 s, 478 MB/s 00:02:33.289 22:11:58 -- spdk/autotest.sh@118 -- # sync 00:02:33.289 22:11:58 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:33.289 22:11:58 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:33.289 22:11:58 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:34.671 22:12:00 -- spdk/autotest.sh@124 -- # uname -s 00:02:34.671 22:12:00 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:34.671 22:12:00 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.671 22:12:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.671 22:12:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.671 22:12:00 -- common/autotest_common.sh@10 -- # set +x 00:02:34.671 ************************************ 00:02:34.671 START TEST setup.sh 00:02:34.671 ************************************ 00:02:34.671 22:12:00 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:34.930 * Looking for test storage... 00:02:34.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.930 22:12:00 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:34.930 22:12:00 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:34.930 22:12:00 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.930 22:12:00 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:34.930 22:12:00 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:34.930 22:12:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:34.930 ************************************ 00:02:34.930 START TEST acl 00:02:34.930 ************************************ 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:34.930 * Looking for test storage... 
00:02:34.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:34.930 22:12:00 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:34.930 22:12:00 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:34.930 22:12:00 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:34.930 22:12:00 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:36.309 22:12:01 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:36.309 22:12:01 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:36.309 22:12:01 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:36.309 22:12:01 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:36.309 22:12:01 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.309 22:12:01 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:37.245 Hugepages 00:02:37.245 node hugesize free / total 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 00:02:37.245 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 
22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:84:00.0 == *:*:*.* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]] 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:37.245 22:12:02 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:37.245 22:12:02 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:37.245 22:12:02 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:37.245 22:12:02 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:37.245 ************************************ 00:02:37.245 START TEST denied 00:02:37.245 ************************************ 00:02:37.245 22:12:02 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:37.245 22:12:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:84:00.0' 00:02:37.245 22:12:02 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:37.245 22:12:02 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.245 22:12:02 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:84:00.0' 00:02:37.245 22:12:02 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:38.625 0000:84:00.0 (8086 0a54): Skipping denied controller at 0000:84:00.0 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:84:00.0 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:84:00.0 ]] 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:84:00.0/driver 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:38.625 22:12:04 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.530 00:02:40.530 real 0m3.357s 00:02:40.530 user 0m0.998s 00:02:40.530 sys 0m1.580s 00:02:40.530 22:12:06 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:40.530 22:12:06 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:40.530 ************************************ 00:02:40.530 END TEST denied 00:02:40.530 ************************************ 00:02:40.530 22:12:06 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:40.530 22:12:06 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:40.530 22:12:06 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:40.530 22:12:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:40.530 ************************************ 00:02:40.530 START TEST allowed 00:02:40.530 
************************************ 00:02:40.530 22:12:06 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:40.530 22:12:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:84:00.0 00:02:40.530 22:12:06 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:40.530 22:12:06 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.530 22:12:06 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:84:00.0 .*: nvme -> .*' 00:02:40.530 22:12:06 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:43.096 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:02:43.096 22:12:08 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:43.096 22:12:08 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:43.096 22:12:08 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:43.096 22:12:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.096 22:12:08 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.033 00:02:44.033 real 0m3.288s 00:02:44.033 user 0m0.885s 00:02:44.033 sys 0m1.398s 00:02:44.033 22:12:09 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.033 22:12:09 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:02:44.033 ************************************ 00:02:44.033 END TEST allowed 00:02:44.033 ************************************ 00:02:44.033 00:02:44.033 real 0m9.040s 00:02:44.033 user 0m2.851s 00:02:44.033 sys 0m4.527s 00:02:44.033 22:12:09 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:44.033 22:12:09 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:44.033 ************************************ 00:02:44.033 END TEST acl 00:02:44.033 ************************************ 00:02:44.033 22:12:09 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.033 22:12:09 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.033 22:12:09 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.033 22:12:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:44.033 ************************************ 00:02:44.033 START TEST hugepages 00:02:44.033 ************************************ 00:02:44.033 22:12:09 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:44.033 * Looking for test storage... 00:02:44.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:44.033 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:44.034 
22:12:09 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 31028664 kB' 'MemAvailable: 34993832 kB' 'Buffers: 2704 kB' 'Cached: 14700420 kB' 'SwapCached: 0 kB' 'Active: 11544996 kB' 'Inactive: 3701476 kB' 'Active(anon): 11080208 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 546600 kB' 'Mapped: 179416 kB' 'Shmem: 10536860 kB' 'KReclaimable: 410568 kB' 'Slab: 706492 kB' 'SReclaimable: 410568 kB' 'SUnreclaim: 295924 kB' 'KernelStack: 10208 kB' 'PageTables: 8516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32437040 kB' 'Committed_AS: 12087752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190464 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:44.034 22:12:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:44.034 22:12:09 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ [xtrace for the remaining /proc/meminfo field comparisons elided: each non-matching key hits "setup/common.sh@32 -- # [[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]" followed by "continue", until Hugepagesize is reached below] 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': 
' 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@37 -- # 
local node hp 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.035 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:44.036 22:12:09 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:44.036 22:12:09 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:44.036 22:12:09 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:44.036 22:12:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:44.036 ************************************ 00:02:44.036 START TEST default_setup 00:02:44.036 ************************************ 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
get_test_nr_hugepages 2097152 0 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:44.036 22:12:09 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:44.973 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:02:44.973 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:02:44.973 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:02:45.234 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:02:46.181 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:46.181 22:12:11 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33128024 kB' 'MemAvailable: 37093176 kB' 'Buffers: 2704 kB' 'Cached: 14700504 kB' 'SwapCached: 0 kB' 'Active: 11559852 kB' 'Inactive: 3701476 kB' 'Active(anon): 11095064 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561872 kB' 'Mapped: 179116 kB' 'Shmem: 10536944 kB' 'KReclaimable: 410552 kB' 'Slab: 706356 kB' 'SReclaimable: 
410552 kB' 'SUnreclaim: 295804 kB' 'KernelStack: 10128 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12102176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190464 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.181 22:12:11 setup.sh.hugepages.default_setup 
-- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] [xtrace for the remaining /proc/meminfo field comparisons elided: each non-matching key hits the AnonHugePages comparison followed by "continue"; the excerpt ends mid-scan]
setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 
22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.182 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33128068 kB' 'MemAvailable: 37093220 kB' 'Buffers: 2704 kB' 'Cached: 14700504 kB' 'SwapCached: 0 kB' 'Active: 11560308 kB' 'Inactive: 3701476 kB' 'Active(anon): 11095520 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561884 kB' 'Mapped: 179116 kB' 'Shmem: 10536944 kB' 'KReclaimable: 410552 kB' 'Slab: 706356 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295804 kB' 'KernelStack: 10128 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12102196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.183 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 
22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.184 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33133056 kB' 'MemAvailable: 37098208 kB' 'Buffers: 2704 kB' 'Cached: 14700524 kB' 'SwapCached: 0 kB' 'Active: 11558948 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094160 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560476 kB' 'Mapped: 179084 kB' 'Shmem: 10536964 kB' 'KReclaimable: 410552 kB' 'Slab: 706444 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295892 kB' 'KernelStack: 9952 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12099856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190256 kB' 
'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.185 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:46.186 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:46.187 nr_hugepages=1024 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.187 resv_hugepages=0 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.187 
surplus_hugepages=0 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.187 anon_hugepages=0 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33132844 kB' 'MemAvailable: 37097996 kB' 'Buffers: 2704 kB' 'Cached: 14700548 kB' 'SwapCached: 0 kB' 'Active: 11558880 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094092 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 
'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560472 kB' 'Mapped: 179072 kB' 'Shmem: 10536988 kB' 'KReclaimable: 410552 kB' 'Slab: 706444 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295892 kB' 'KernelStack: 10064 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12099880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190240 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.187 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.188 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 
'MemFree: 19666780 kB' 'MemUsed: 13214968 kB' 'SwapCached: 0 kB' 'Active: 6802608 kB' 'Inactive: 3397596 kB' 'Active(anon): 6591200 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872460 kB' 'Mapped: 100852 kB' 'AnonPages: 330920 kB' 'Shmem: 6263456 kB' 'KernelStack: 5816 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421756 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 
22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.189 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical per-field trace elided for the remaining /proc/meminfo keys (WritebackTmp through HugePages_Free); every non-matching key falls through to "continue" ...]
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:46.190 22:12:11
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:46.190 node0=1024 expecting 1024
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:46.190
00:02:46.190 real	0m2.140s
00:02:46.190 user	0m0.577s
00:02:46.190 sys	0m0.742s
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:46.190 22:12:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:02:46.190 ************************************
00:02:46.190 END TEST default_setup
00:02:46.190 ************************************
00:02:46.190 22:12:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:46.190 22:12:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:46.190 22:12:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:46.190 22:12:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:46.190 ************************************
00:02:46.190 START TEST per_node_1G_alloc
00:02:46.190 ************************************
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:46.190
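The field-by-field trace that precedes the END TEST banner (setup/common.sh@31-@33) is the test's get_meminfo helper scanning /proc/meminfo for one key. A minimal sketch of that scan technique, reconstructed from the xtrace output rather than taken from the actual SPDK setup/common.sh (the function name get_field and the inline sample data are illustrative assumptions):

```shell
# Sketch -- reconstructed from the xtrace above, NOT the actual SPDK
# setup/common.sh; get_field and the sample text are illustrative.
# Scan meminfo-style "Key: value" lines on stdin and print the value of
# the requested key, the way the traced loop does with
# IFS=': ' / read -r var val _.
get_field() {
    get=$1
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"     # matched key: emit its value and stop
            return 0
        fi
    done                    # non-matching keys just fall through ("continue")
    return 1
}

sample='MemTotal: 52291180 kB
HugePages_Total: 1024
HugePages_Surp: 0'

printf '%s\n' "$sample" | get_field HugePages_Surp    # prints 0
printf '%s\n' "$sample" | get_field HugePages_Total   # prints 1024
```

Setting IFS to ': ' makes read split on both the colon and the space, so `var` gets the key and `val` the number, with the trailing unit swallowed by `_`.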
22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:46.190 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:46.191 22:12:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:47.590 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:47.590 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:47.590 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:47.590 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:47.590 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:47.590 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:47.590 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:47.590 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:47.590 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:47.590 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:47.590 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:47.590 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:47.590 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:47.590 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:47.590 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:47.590
0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:47.590 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:47.590 22:12:12
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33139376 kB' 'MemAvailable: 37104528 kB' 'Buffers: 2704 kB' 'Cached: 14700612 kB' 'SwapCached: 0 kB' 'Active: 11559412 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094624 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560772 kB' 'Mapped: 179196 kB' 'Shmem: 10537052 kB' 'KReclaimable: 410552 kB' 'Slab: 706264 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295712 kB' 'KernelStack: 10032 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:47.590 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical per-field trace elided for the remaining /proc/meminfo keys (MemFree through HardwareCorrupted); every non-matching key falls through to "continue" ...]
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:47.592 22:12:12
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138596 kB' 'MemAvailable: 37103748 kB' 'Buffers: 2704 kB' 'Cached: 14700620 kB' 'SwapCached: 0 kB' 'Active: 11559228 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094440 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560580 kB' 'Mapped: 179084 kB' 'Shmem: 10537060 kB' 'KReclaimable: 410552 kB' 'Slab: 706272 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295720 kB' 'KernelStack: 10064 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 
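The snapshot above reports HugePages_Total: 1024 for a run launched with NRHUGE=512 and HUGENODE=0,1, i.e. 512 pages requested on each of the two NUMA nodes. A sketch of that per-node bookkeeping, reconstructed from the setup/hugepages.sh xtrace earlier in this log (the standalone script form and variable spellings are illustrative assumptions, not the actual SPDK source):

```shell
# Sketch -- reconstructed from the setup/hugepages.sh trace in this log,
# NOT the actual SPDK source. Mirrors get_test_nr_hugepages_per_node
# (@70/@71): assign NRHUGE pages to every requested node, then report the
# per-node counts and the total the verify step should see.
declare -A nodes_test     # per-node page counts, as in the trace
user_nodes=(0 1)          # HUGENODE=0,1
nr_per_node=512           # NRHUGE=512

for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_per_node
done

total=0
for node in "${user_nodes[@]}"; do
    echo "node${node}=${nodes_test[$node]}"
    total=$((total + nodes_test[$node]))
done
echo "expecting HugePages_Total: $total"   # 512 * 2 nodes = 1024
```

The 1024 total is what links the per-node request back to the global HugePages_Total line in the /proc/meminfo snapshot; the per-node counts would come from /sys/devices/system/node/nodeN/meminfo, which is why get_meminfo carries an optional node argument.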
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical per-field trace elided for the subsequent /proc/meminfo keys (MemFree through Mlocked); the log chunk ends mid-scan ...]
read -r var val _ 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.592 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:12 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:12 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.593 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138596 kB' 'MemAvailable: 37103748 kB' 'Buffers: 2704 kB' 'Cached: 14700636 kB' 'SwapCached: 0 kB' 'Active: 11559284 kB' 'Inactive: 3701476 kB' 
'Active(anon): 11094496 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560580 kB' 'Mapped: 179084 kB' 'Shmem: 10537076 kB' 'KReclaimable: 410552 kB' 'Slab: 706272 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295720 kB' 'KernelStack: 10064 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190320 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.594 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 
22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.595 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:47.596 nr_hugepages=1024 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:47.596 resv_hugepages=0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:47.596 surplus_hugepages=0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:47.596 anon_hugepages=0 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33138596 kB' 'MemAvailable: 37103748 kB' 'Buffers: 2704 kB' 'Cached: 14700660 kB' 'SwapCached: 0 kB' 'Active: 11559308 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094520 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560584 kB' 'Mapped: 179084 kB' 'Shmem: 10537100 kB' 'KReclaimable: 410552 kB' 'Slab: 706272 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295720 kB' 'KernelStack: 10064 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190320 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.596 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 
22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.597 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:47.598 22:12:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (read/skip loop: ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted skipped; HugePages_Total matches)
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29-30 -- # for node in /sys/devices/system/node/node+([0-9]); nodes_sys[${node##*node}]=512 (both nodes)
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-24 -- # local get=HugePages_Surp node=0 var val mem_f mem; [[ -e /sys/devices/system/node/node0/meminfo ]]; mem_f=/sys/devices/system/node/node0/meminfo
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20718128 kB' 'MemUsed: 12163620 kB' 'SwapCached: 0 kB' 'Active: 6802532 kB' 'Inactive: 3397596 kB' 'Active(anon): 6591124 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872500 kB' 'Mapped: 100864 kB' 'AnonPages: 330756 kB' 'Shmem: 6263496 kB' 'KernelStack: 5784 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421600 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:47.598 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (read/skip loop over all node0 meminfo fields, MemTotal through HugePages_Free; HugePages_Surp matches)
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17-24 -- # local get=HugePages_Surp node=1 var val mem_f mem; [[ -e /sys/devices/system/node/node1/meminfo ]]; mem_f=/sys/devices/system/node/node1/meminfo
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:47.600 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12420904 kB' 'MemUsed: 6988528 kB' 'SwapCached: 0 kB' 'Active: 4757132 kB' 'Inactive: 303880 kB' 'Active(anon): 4503752 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4830904 kB' 'Mapped: 78220 kB' 'AnonPages: 230116 kB' 'Shmem: 4273644 kB' 'KernelStack: 4280 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139480 kB' 'Slab: 284672 kB' 'SReclaimable: 139480 kB' 'SUnreclaim: 145192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # (read/skip loop over node1 meminfo fields, MemTotal through FilePmdMapped)
00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.601 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:47.602 node0=512 expecting 512 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:47.602 node1=512 expecting 512 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:47.602 00:02:47.602 real 0m1.271s 00:02:47.602 user 0m0.596s 00:02:47.602 sys 0m0.708s 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:47.602 22:12:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:47.602 ************************************ 00:02:47.602 END TEST per_node_1G_alloc 00:02:47.602 ************************************ 00:02:47.602 22:12:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:47.602 22:12:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:47.602 22:12:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:47.602 22:12:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:47.602 ************************************ 00:02:47.602 START TEST even_2G_alloc 00:02:47.602 ************************************ 00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # [... xtrace repeats: while (( _no_nodes > 0 )) assigns nodes_test[1]=512, then nodes_test[0]=512 ...]
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:47.602 22:12:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:48.541 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:48.541 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:48.541 [... 14 similar lines: 0000:00:04.0-04.6 (8086 3c20-3c26) and 0000:80:04.1-04.7 (8086 3c21-3c27) already using the vfio-pci driver ...]
00:02:48.541 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.541 22:12:14 
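[Editor's note: the even per-node split traced above reduces to a few lines of bash. The sketch below is a reconstruction from the xtrace output, not the actual setup/hugepages.sh source; the variable names (nr_hugepages, _no_nodes, nodes_test) mirror the trace but the loop body is an assumption.]

```shell
#!/usr/bin/env bash
# Reconstructed sketch: divide nr_hugepages evenly across _no_nodes NUMA
# nodes, walking node indices from high to low as the trace does
# (nodes_test[1]=512, then nodes_test[0]=512).
nr_hugepages=1024
_no_nodes=2
_nr=$((nr_hugepages / _no_nodes))   # 512 pages per node
declare -a nodes_test
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$_nr  # fill the highest-numbered node first
    (( _no_nodes-- ))
done
echo "node0=${nodes_test[0]} expecting ${nodes_test[0]}"   # node0=512 expecting 512
echo "node1=${nodes_test[1]} expecting ${nodes_test[1]}"   # node1=512 expecting 512
```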
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:48.541 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:48.542 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33153680 kB' 'MemAvailable: 37118832 kB' 'Buffers: 2704 kB' 'Cached: 14700744 kB' 'SwapCached: 0 kB' 'Active: 11560120 kB' 'Inactive: 3701476 kB' 'Active(anon): 11095332 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560948 kB' 'Mapped: 179212 kB' 'Shmem: 10537184 kB' 'KReclaimable: 410552 kB' 'Slab: 706400 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295848 kB' 'KernelStack: 10048 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190512 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
00:02:48.542 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... xtrace repeats: each field from MemTotal through HardwareCorrupted compared against AnonHugePages; no match, continue ...]
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33156480 kB' 'MemAvailable: 37121632 kB' 'Buffers: 2704 kB' 'Cached: 14700744 kB' 'SwapCached: 0 kB' 'Active: 11559548 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094760 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560780 kB' 'Mapped: 179212 kB' 'Shmem: 10537184 kB' 'KReclaimable: 410552 kB' 'Slab: 706392 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295840 kB' 'KernelStack: 10064 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190496 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [... xtrace repeats: fields MemTotal, MemFree, MemAvailable, Buffers, Cached compared against HugePages_Surp; scan continues ...] 00:02:48.543 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.543 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.544 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 
22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 52291180 kB' 'MemFree: 33156352 kB' 'MemAvailable: 37121504 kB' 'Buffers: 2704 kB' 'Cached: 14700764 kB' 'SwapCached: 0 kB' 'Active: 11559764 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094976 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560932 kB' 'Mapped: 179100 kB' 'Shmem: 10537204 kB' 'KReclaimable: 410552 kB' 'Slab: 706392 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295840 kB' 'KernelStack: 10064 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190464 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.545 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.546 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:48.546 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:48.546-00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace condensed: the IFS=': ' read -r var val _ loop 'continue's past each remaining /proc/meminfo key -- Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free -- until the requested key matches)
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:48.810 nr_hugepages=1024
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:48.810 resv_hugepages=0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:48.810 surplus_hugepages=0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:48.810 anon_hugepages=0
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:48.810 22:12:14
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33157128 kB' 'MemAvailable: 37122280 kB' 'Buffers: 2704 kB' 'Cached: 14700788 kB' 'SwapCached: 0 kB' 'Active: 11559476 kB' 'Inactive: 3701476 kB' 'Active(anon): 11094688 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560644 kB' 'Mapped: 179100 kB' 'Shmem: 10537228 kB' 'KReclaimable: 410552 kB' 'Slab: 706392 kB' 'SReclaimable: 410552 kB' 'SUnreclaim: 295840 kB' 'KernelStack: 10064 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12100552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190464 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:48.810 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.810 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:02:48.810-00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (repetitive xtrace condensed: the IFS=': ' read -r var val _ loop again 'continue's past every key in the snapshot above -- MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted -- until the requested key matches)
00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:48.812
22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20730632 kB' 'MemUsed: 12151116 kB' 'SwapCached: 0 kB' 'Active: 6802512 kB' 'Inactive: 3397596 kB' 'Active(anon): 6591104 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872560 kB' 'Mapped: 100880 kB' 'AnonPages: 330728 kB' 'Shmem: 6263556 kB' 'KernelStack: 5784 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421728 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.812 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.813 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 
00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12426496 kB' 'MemUsed: 6982936 kB' 'SwapCached: 0 kB' 'Active: 4756960 kB' 'Inactive: 303880 kB' 'Active(anon): 4503580 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4830952 kB' 'Mapped: 78220 kB' 'AnonPages: 229912 kB' 'Shmem: 4273692 kB' 'KernelStack: 4280 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139480 kB' 'Slab: 284664 kB' 'SReclaimable: 139480 kB' 'SUnreclaim: 145184 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.814 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:48.815 node0=512 expecting 512 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:48.815 node1=512 expecting 512 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:48.815 00:02:48.815 real 0m1.145s 
00:02:48.815 user 0m0.521s 00:02:48.815 sys 0m0.657s 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:48.815 22:12:14 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:48.815 ************************************ 00:02:48.815 END TEST even_2G_alloc 00:02:48.815 ************************************ 00:02:48.815 22:12:14 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:48.815 22:12:14 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:48.815 22:12:14 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:48.815 22:12:14 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:48.815 ************************************ 00:02:48.815 START TEST odd_alloc 00:02:48.815 ************************************ 00:02:48.815 22:12:14 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:02:48.815 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:48.815 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:48.815 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:48.815 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.816 22:12:14 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:49.756 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:49.756 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:49.756 
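The odd_alloc trace above distributes 1025 hugepages over _no_nodes=2 NUMA nodes, assigning nodes_test[1]=512 first and nodes_test[0]=513 second (the `: 513` / `: 1` lines are the no-op arithmetic side effects of the remaining-count updates). As a hedged illustration — a re-creation of the split logic traced here, not SPDK's hugepages.sh itself — the per-node division can be sketched as:

```shell
#!/usr/bin/env bash
# Re-creation of the per-node hugepage split traced above (illustrative,
# not the SPDK source): each iteration gives the highest-numbered remaining
# node the floor of remaining/nodes, so the odd leftover lands on node 0.
nr_hugepages=1025
_no_nodes=2
declare -a nodes_test
_rem=$nr_hugepages
while (( _no_nodes > 0 )); do
  share=$(( _rem / _no_nodes ))        # 1025/2 = 512, then 513/1 = 513
  nodes_test[_no_nodes - 1]=$share
  _rem=$(( _rem - share ))
  _no_nodes=$(( _no_nodes - 1 ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

Running this prints `node0=513 node1=512`, matching the nodes_test[] assignments in the hugepages.sh@81-84 trace lines above.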
0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:49.756 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:49.756 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:49.756 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:49.756 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:49.756 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:02:49.756 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:49.756 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:49.756 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:49.756 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:49.756 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:49.756 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:49.756 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:49.756 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:02:49.756 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:49.756 
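The `get_meminfo AnonHugePages` call above is what produces the long field-by-field scan in the trace that follows: common.sh reads /proc/meminfo entries with `IFS=': ' read -r var val _` and `continue`s until the requested field matches. A compressed standalone sketch of that lookup (illustrative only — SPDK's common.sh additionally strips `Node N` prefixes and can read a per-node /sys/devices/system/node/nodeN/meminfo file):

```shell
#!/usr/bin/env bash
# Hypothetical standalone version of the get_meminfo helper traced below:
# scan /proc/meminfo line by line, print the numeric value of the requested
# field (in kB where applicable), or 0 if the field is absent.
get_meminfo() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    # var is the field name, val the number, _ swallows the "kB" suffix
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < /proc/meminfo
  echo 0
}

get_meminfo MemTotal        # prints total memory in kB
get_meminfo HugePages_Surp  # prints 0 when no surplus pages exist
```

The `echo 0` fallback mirrors the `common.sh@33 -- # echo 0` / `return 0` lines visible in the trace when a counter is zero or missing.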
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33148316 kB' 'MemAvailable: 37113460 kB' 'Buffers: 2704 kB' 'Cached: 14700876 kB' 'SwapCached: 0 kB' 'Active: 11555508 kB' 'Inactive: 3701476 kB' 'Active(anon): 11090720 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556564 kB' 'Mapped: 178044 kB' 'Shmem: 10537316 kB' 'KReclaimable: 410544 kB' 'Slab: 706220 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295676 kB' 'KernelStack: 9968 kB' 'PageTables: 7476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12085436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 
0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.756 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.757 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 
'MemFree: 33148448 kB' 'MemAvailable: 37113592 kB' 'Buffers: 2704 kB' 'Cached: 14700880 kB' 'SwapCached: 0 kB' 'Active: 11555752 kB' 'Inactive: 3701476 kB' 'Active(anon): 11090964 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556828 kB' 'Mapped: 178088 kB' 'Shmem: 10537320 kB' 'KReclaimable: 410544 kB' 'Slab: 706228 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295684 kB' 'KernelStack: 9936 kB' 'PageTables: 7392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12085456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190320 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.758 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:49.759 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.759 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33147692 kB' 'MemAvailable: 37112836 kB' 'Buffers: 2704 kB' 'Cached: 14700896 kB' 'SwapCached: 0 kB' 'Active: 11555856 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091068 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556936 kB' 'Mapped: 178020 kB' 'Shmem: 10537336 kB' 'KReclaimable: 410544 kB' 'Slab: 706260 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295716 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 
kB' 'Committed_AS: 12085476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190320 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:49.760 
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.760 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[... identical compare/continue/IFS/read trace repeats for every remaining /proc/meminfo field (AnonPages through HugePages_Free) until HugePages_Rsvd is reached ...]
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:49.762 nr_hugepages=1025
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:49.762 resv_hugepages=0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:49.762 surplus_hugepages=0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:49.762 anon_hugepages=0
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33150124 kB' 'MemAvailable: 37115268 kB' 'Buffers: 2704 kB' 'Cached: 14700896 kB' 'SwapCached: 0 kB' 'Active: 11555816 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091028 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556896 kB' 'Mapped: 177840 kB' 'Shmem: 10537336 kB' 'KReclaimable: 410544 kB' 'Slab: 706260 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295716 kB' 'KernelStack: 9984 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33484592 kB' 'Committed_AS: 12085256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:49.762 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... identical compare/continue/IFS/read trace repeats for every field (MemFree through HugePages_Free) until HugePages_Total is reached ...]
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
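The `get_meminfo` helper being traced above splits each meminfo line on `': '` with `read -r var val _` and echoes the value of the requested field. A minimal, self-contained sketch of that pattern (an assumption for illustration, not SPDK's exact `setup/common.sh`; the file argument stands in for `/proc/meminfo` or a per-node `/sys/devices/system/node/node<N>/meminfo`):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo lookup pattern seen in the trace (hypothetical
# simplification, not the real setup/common.sh). Per-node meminfo files
# prefix each line with "Node <N> ", which the real script strips with
# extglob; here we re-split once when that prefix is present.
get_meminfo() {
	local get=$1 mem_f=${2:-/proc/meminfo}
	local var val rest
	while IFS=': ' read -r var val rest; do
		if [[ $var == Node ]]; then
			# "Node 0 HugePages_Total: 512" -> re-read the remainder
			IFS=': ' read -r var val rest <<< "$rest"
		fi
		if [[ $var == "$get" ]]; then
			echo "${val:-0}"
			return 0
		fi
	done < "$mem_f"
	return 1
}

# Demo against a small sample in the same format as the traced output.
sample=$(mktemp)
printf '%s\n' 'MemTotal:       52291180 kB' \
	'HugePages_Total:    1025' \
	'HugePages_Rsvd:        0' > "$sample"
get_meminfo HugePages_Total "$sample" # prints 1025
rm -f "$sample"
```

This is why the trace shows one `IFS=': '` / `read -r var val _` / `[[ field == pattern ]]` / `continue` quartet per meminfo line: the loop walks every field until the requested key matches, then echoes the value and returns.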
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20725880 kB' 'MemUsed: 12155868 kB' 'SwapCached: 0 kB' 'Active: 6801476 kB' 'Inactive: 3397596 kB' 'Active(anon): 6590068 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872684 kB' 'Mapped: 100396 kB' 'AnonPages: 329560 kB' 'Shmem: 6263680 kB' 'KernelStack: 5784 kB' 'PageTables: 4096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421660 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150588 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.027 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:50.028 
22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 12423896 kB' 'MemUsed: 6985536 kB' 'SwapCached: 0 kB' 'Active: 4754812 kB' 'Inactive: 303880 kB' 'Active(anon): 4501432 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4830964 kB' 'Mapped: 77624 kB' 'AnonPages: 227828 kB' 'Shmem: 4273704 kB' 'KernelStack: 4216 kB' 'PageTables: 3480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139472 kB' 'Slab: 284588 kB' 'SReclaimable: 139472 kB' 
'SUnreclaim: 145116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.028 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 
22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:50.029 node0=512 expecting 513 00:02:50.029 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:50.030 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:50.030 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:50.030 22:12:15 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:50.030 node1=513 expecting 512 00:02:50.030 22:12:15 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]

real 0m1.135s
user 0m0.507s
sys  0m0.660s

22:12:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
22:12:15 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST odd_alloc
************************************
22:12:15 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
22:12:15 setup.sh.hugepages -- common/autotest_common.sh@1099-1105 -- # '[' 2 -le 1 ']'; xtrace_disable; set +x
************************************
START TEST custom_alloc
************************************
22:12:15 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167-172 -- # local IFS=,; local node; nodes_hp=(); local nr_hugepages=0 _nr_hugepages=0
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49-50 -- # local size=1048576; (( 1 > 1 ))
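The trace above shows get_test_nr_hugepages converting a requested pool size in kB into a hugepage count. A minimal sketch of that arithmetic, assuming the 2048 kB default hugepage size that this box's /proc/meminfo reports ('Hugepagesize: 2048 kB'); variable names mirror the trace:

```shell
# Sketch only: size -> page-count step as seen in the trace.
# Assumption: default hugepage size is 2048 kB (2M pages).
size=1048576            # requested pool size in kB (1 GiB)
default_hugepages=2048  # kB per 2M hugepage
if (( size >= default_hugepages )); then
  nr_hugepages=$(( size / default_hugepages ))
fi
echo "$nr_hugepages"    # 512, matching nr_hugepages=512 in the trace
```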
setup/hugepages.sh@55-57 -- # (( size >= default_hugepages )); nr_hugepages=512
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58-67 -- # get_test_nr_hugepages_per_node: user_nodes=(), _nr_hugepages=512, _no_nodes=2, nodes_test=()
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81-84 -- # while (( _no_nodes > 0 )): nodes_test[1]=256, nodes_test[0]=256
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176-177 -- # (( 2 > 1 )); get_test_nr_hugepages 2097152
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55-57 -- # (( size >= default_hugepages )); nr_hugepages=1024
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58-76 -- # get_test_nr_hugepages_per_node: one nodes_hp entry -> nodes_test[0]=512
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
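The per-node split traced above (512 pages over 2 nodes giving 256 each) can be sketched as an equal-share loop; this is a simplified stand-in, not the exact hugepages.sh implementation:

```shell
# Simplified sketch (assumption: even split, no per-node user request),
# reproducing the nodes_test=(256 256) result from the trace.
_nr_hugepages=512
_no_nodes=2
declare -a nodes_test
per_node=$(( _nr_hugepages / _no_nodes ))
for (( node = 0; node < _no_nodes; node++ )); do
  nodes_test[node]=$per_node   # each NUMA node gets an equal share
done
echo "${nodes_test[@]}"        # 256 256
```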
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181-183 -- # for node in "${!nodes_hp[@]}": HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}"); (( _nr_hugepages += nodes_hp[node] ))  [both nodes]
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node: _nr_hugepages=1024, _no_nodes=2, two nodes_hp entries
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75-76 -- # nodes_test[0]=512
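The HUGENODE assembly traced here can be sketched as below. The comma join via a subshell IFS is an illustrative choice (the traced script instead declares 'local IFS=,' up front); the per-node values come straight from the trace:

```shell
# Sketch: build the HUGENODE spec and the running page total from the
# per-node array, as the @181-183 loop in the trace does.
nodes_hp=( [0]=512 [1]=1024 )
HUGENODE=()
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
  HUGENODE+=( "nodes_hp[$node]=${nodes_hp[node]}" )
  (( _nr_hugepages += nodes_hp[node] ))     # accumulate the total
done
spec=$(IFS=','; printf '%s' "${HUGENODE[*]}")  # comma-join the entries
echo "$spec"            # nodes_hp[0]=512,nodes_hp[1]=1024
echo "$_nr_hugepages"   # 1536
```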
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75-76 -- # nodes_test[1]=1024
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
22:12:15 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'; setup output
22:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
22:12:15 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:50.970 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:50.970 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:50.970 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:50.970 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:50.970 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:50.970 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:50.970 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:50.970 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:50.970 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver
00:02:50.970 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver
00:02:50.970 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver
00:02:50.970 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver
00:02:50.970 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver
00:02:50.970 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver
00:02:50.970 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver
00:02:50.970 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver
00:02:50.970 0000:80:04.0 (8086 3c20): Already using 
the vfio-pci driver
00:02:51.235 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536; verify_nr_hugepages
22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96-97 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]; get_meminfo AnonHugePages
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # get=AnonHugePages, node='', mem_f=/proc/meminfo, mapfile -t mem
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '; read -r var val _
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32104036 kB' 'MemAvailable: 36069180 kB' 'Buffers: 2704 kB' 'Cached: 14701008 kB' 'SwapCached: 0 kB' 'Active: 11555988 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091200 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556972 kB' 'Mapped: 178156 kB' 'Shmem: 10537448 kB' 'KReclaimable: 410544 kB' 'Slab: 706220 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295676 kB' 'KernelStack: 10016 kB' 'PageTables: 7608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12085856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190352 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; continue
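The long key-by-key scan that follows is the get_meminfo pattern: split each line on IFS=': ', skip non-matching keys with continue, and return the value once the requested key matches. A self-contained sketch, with a short here-doc standing in for the real /proc/meminfo:

```shell
# Sketch of the get_meminfo scan loop visible in the trace.
# Assumption: the here-doc below is a stand-in for /proc/meminfo.
get=AnonHugePages
anon=""
while IFS=': ' read -r var val _; do
  [[ $var == "$get" ]] || continue  # every other key just continues
  anon=$val
  break
done <<'EOF'
MemTotal: 52291180 kB
AnonHugePages: 0 kB
HugePages_Total: 1536
EOF
echo "$anon"   # 0, matching anon=0 in the trace
```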
setup/common.sh@31-32 -- # IFS=': '; read -r var val _; [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] || continue  [identical check repeated for every /proc/meminfo key from MemFree through HardwareCorrupted; AnonHugePages finally matches]
00:02:51.237 
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0; return 0
22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # get=HugePages_Surp, node='', mem_f=/proc/meminfo, mapfile -t mem
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '; read -r var val _
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32104284 kB' 'MemAvailable: 36069428 kB' 'Buffers: 2704 kB' 'Cached: 14701008 kB' 'SwapCached: 0 kB' 'Active: 11556476 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091688 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557488 kB' 'Mapped: 178116 kB' 'Shmem: 10537448 kB' 'KReclaimable: 410544 kB' 'Slab: 706220 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295676 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12085876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal/MemFree/MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue
00:02:51.237 
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.237 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.238 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.239 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32104284 kB' 'MemAvailable: 36069428 kB' 'Buffers: 2704 kB' 'Cached: 14701028 kB' 'SwapCached: 0 kB' 'Active: 11556040 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091252 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 557056 kB' 'Mapped: 178044 kB' 'Shmem: 10537468 kB' 'KReclaimable: 410544 kB' 'Slab: 706236 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295692 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12085896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.239 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.240 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:51.241 nr_hugepages=1536 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:51.241 resv_hugepages=0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:51.241 surplus_hugepages=0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:51.241 anon_hugepages=0 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 32104032 kB' 'MemAvailable: 36069176 kB' 
'Buffers: 2704 kB' 'Cached: 14701048 kB' 'SwapCached: 0 kB' 'Active: 11556052 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091264 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557056 kB' 'Mapped: 178044 kB' 'Shmem: 10537488 kB' 'KReclaimable: 410544 kB' 'Slab: 706236 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295692 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 32961328 kB' 'Committed_AS: 12085916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190288 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.241 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.242 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 
22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.243 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 20732764 kB' 'MemUsed: 
12148984 kB' 'SwapCached: 0 kB' 'Active: 6801528 kB' 'Inactive: 3397596 kB' 'Active(anon): 6590120 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872792 kB' 'Mapped: 100424 kB' 'AnonPages: 329524 kB' 'Shmem: 6263788 kB' 'KernelStack: 5768 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421652 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.244 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.245 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19409432 kB' 'MemFree: 11371268 kB' 'MemUsed: 8038164 kB' 'SwapCached: 0 kB' 'Active: 4754564 kB' 'Inactive: 303880 kB' 'Active(anon): 4501184 kB' 'Inactive(anon): 0 kB' 'Active(file): 253380 kB' 'Inactive(file): 303880 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 4830984 kB' 'Mapped: 77620 kB' 'AnonPages: 227508 kB' 'Shmem: 4273724 kB' 'KernelStack: 4232 kB' 'PageTables: 3472 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 139472 kB' 'Slab: 284584 kB' 'SReclaimable: 139472 kB' 'SUnreclaim: 145112 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.245 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.246 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.247 22:12:16 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:51.247 node0=512 expecting 512 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:51.247 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:51.248 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:02:51.248 node1=1024 expecting 1024 00:02:51.248 22:12:16 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:02:51.248 00:02:51.248 real 0m1.280s 00:02:51.248 user 0m0.608s 00:02:51.248 sys 0m0.706s 00:02:51.248 22:12:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:51.248 22:12:16 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:51.248 ************************************ 00:02:51.248 END TEST custom_alloc 00:02:51.248 ************************************ 00:02:51.248 22:12:16 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:02:51.248 22:12:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:51.248 22:12:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:51.248 22:12:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:51.248 ************************************ 00:02:51.248 START TEST no_shrink_alloc 00:02:51.248 ************************************ 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:02:51.248 22:12:16 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.248 22:12:16 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:52.188 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:52.188 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:52.188 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:52.188 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:52.188 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:52.188 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:52.188 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:52.188 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:02:52.188 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:52.188 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:52.188 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:52.188 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:52.188 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:52.188 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:52.188 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:52.188 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:02:52.188 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33145628 kB' 'MemAvailable: 37110772 kB' 'Buffers: 2704 kB' 'Cached: 14701132 kB' 'SwapCached: 0 kB' 'Active: 11556356 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091568 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556888 kB' 'Mapped: 178180 kB' 'Shmem: 10537572 kB' 'KReclaimable: 410544 kB' 'Slab: 706244 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295700 kB' 'KernelStack: 9968 kB' 'PageTables: 7508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.453 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.454 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.455 
22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33145376 kB' 'MemAvailable: 37110520 kB' 'Buffers: 2704 kB' 'Cached: 14701132 kB' 'SwapCached: 0 kB' 'Active: 11556772 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091984 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557268 kB' 'Mapped: 
178132 kB' 'Shmem: 10537572 kB' 'KReclaimable: 410544 kB' 'Slab: 706244 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295700 kB' 'KernelStack: 10000 kB' 'PageTables: 7592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.455 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.455 22:12:17 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 00:02:52.455-00:02:52.457 22:12:17 -- [trace condensed: the IFS=': ' read/continue loop steps past every non-matching /proc/meminfo key (Buffers, Cached, SwapCached, Active, Inactive, ..., HugePages_Total, HugePages_Free, HugePages_Rsvd) until HugePages_Surp matches]
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- #
read -r var val _ 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33145936 kB' 'MemAvailable: 37111080 kB' 'Buffers: 2704 kB' 'Cached: 14701152 kB' 'SwapCached: 0 kB' 'Active: 11556280 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091492 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557120 kB' 'Mapped: 178056 kB' 'Shmem: 10537592 kB' 'KReclaimable: 410544 kB' 'Slab: 706232 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295688 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.457 22:12:17 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:52.457-00:02:52.459 22:12:17-22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- [trace condensed: the same read/continue loop steps past every non-matching /proc/meminfo key (MemAvailable, Buffers, Cached, ..., Percpu, HardwareCorrupted) while scanning for HugePages_Rsvd; trace truncated here]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 
22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:52.459 nr_hugepages=1024 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:52.459 resv_hugepages=0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:52.459 surplus_hugepages=0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:52.459 anon_hugepages=0 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:52.459 22:12:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.459 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33145936 kB' 'MemAvailable: 37111080 kB' 'Buffers: 2704 kB' 'Cached: 14701172 kB' 'SwapCached: 0 kB' 'Active: 11556272 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091484 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557120 kB' 'Mapped: 178056 kB' 'Shmem: 10537612 kB' 'KReclaimable: 410544 kB' 'Slab: 706232 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295688 kB' 'KernelStack: 10000 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 
22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.460 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc 
-- setup/hugepages.sh@32 -- # no_nodes=2 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:52.461 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:52.462 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:52.462 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:52.462 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:52.462 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19686904 kB' 'MemUsed: 13194844 kB' 'SwapCached: 0 kB' 'Active: 6801304 kB' 'Inactive: 3397596 kB' 'Active(anon): 6589896 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 
'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872888 kB' 'Mapped: 100436 kB' 'AnonPages: 329192 kB' 'Shmem: 6263884 kB' 'KernelStack: 5736 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421592 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [... repeated per-key skip: [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _, for every key from MemTotal through HugePages_Free ...] 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:52.463 node0=1024 expecting 1024 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.463 22:12:18 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:53.403 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:53.403 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:53.403 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:53.403 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:53.403 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:53.403 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:53.403 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:53.403 0000:00:04.1 (8086 3c21): Already using the vfio-pci
driver 00:02:53.403 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:53.403 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:02:53.403 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:02:53.403 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:02:53.403 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:02:53.403 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:02:53.403 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:02:53.403 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:02:53.403 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:02:53.403 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.403 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33150340 kB' 'MemAvailable: 37115484 kB' 'Buffers: 2704 kB' 'Cached: 14701236 kB' 'SwapCached: 0 kB' 'Active: 11556280 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091492 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556980 kB' 'Mapped: 178260 kB' 'Shmem: 10537676 kB' 'KReclaimable: 410544 kB' 'Slab: 706192 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295648 kB' 'KernelStack: 9984 kB' 'PageTables: 7520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190384 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:53.403 [... repeated per-key skip: [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue / IFS=': ' / read -r var val _, for each non-matching key from MemTotal through WritebackTmp; the trace continues past the end of this chunk ...] 00:02:53.669 22:12:19
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.669 22:12:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33149224 kB' 'MemAvailable: 37114368 kB' 'Buffers: 2704 kB' 'Cached: 14701240 kB' 'SwapCached: 0 kB' 'Active: 11556956 kB' 'Inactive: 3701476 kB' 'Active(anon): 11092168 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557652 kB' 'Mapped: 178136 kB' 'Shmem: 10537680 kB' 'KReclaimable: 410544 kB' 'Slab: 706248 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295704 kB' 'KernelStack: 10016 kB' 'PageTables: 7632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190368 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:53.669 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue iterations for the remaining /proc/meminfo fields ...]
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
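The xtrace above is just a loop that splits each `Key: value [kB]` line of /proc/meminfo on `': '` and stops at the requested key. A minimal standalone sketch of that pattern (a hypothetical simplification of the `get_meminfo` helper being traced; the optional file argument is an addition for testability, not part of the original) could look like:

```shell
# Hypothetical simplification of the get_meminfo pattern traced above:
# split each "Key: value [kB]" line on ': ' and print the value of the
# requested key (e.g. HugePages_Surp yields 0 in the snapshot above).
get_meminfo() {
    get=$1
    mem_f=${2:-/proc/meminfo}   # file argument added here for testing only
    while IFS=': ' read -r var val _; do
        if [ "$var" = "$get" ]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}
```

The per-node variant seen in the trace (`/sys/devices/system/node/node<N>/meminfo`) follows the same scheme, after stripping the leading `Node <N> ` prefix from each line as the `mem=("${mem[@]#Node +([0-9]) }")` step does.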
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33149576 kB' 'MemAvailable: 37114720 kB' 'Buffers: 2704 kB' 'Cached: 14701260 kB' 'SwapCached: 0 kB' 'Active: 11556276 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091488 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556928 kB' 'Mapped: 178064 kB' 'Shmem: 10537700 kB' 'KReclaimable: 410544 kB' 'Slab: 706248 kB' 
'SReclaimable: 410544 kB' 'SUnreclaim: 295704 kB' 'KernelStack: 9984 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB'
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:53.671 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
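Before the scan, the trace at common.sh@18-25 shows how the source file is chosen: `node` is empty, so the test `[[ -e /sys/devices/system/node/node/meminfo ]]` fails and `mem_f` stays `/proc/meminfo`. A hedged sketch of that selection logic (the helper name `pick_meminfo_source` is hypothetical; the real logic lives inline in setup/common.sh):

```shell
# Assumed reconstruction of the meminfo source selection, NOT the actual
# setup/common.sh source. With an empty node id the per-NUMA-node path
# /sys/devices/system/node/node<N>/meminfo cannot match, so the caller
# falls back to the global /proc/meminfo.
pick_meminfo_source() {
	local node=$1 mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	echo "$mem_f"
}
```

With `node=` (as in this trace), the sketch prints `/proc/meminfo`; passing a valid node id on a NUMA system would select that node's meminfo instead.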
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.672 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / continue scan repeated for each remaining /proc/meminfo key, from Cached through HugePages_Free]
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:53.673 22:12:19
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:53.673 nr_hugepages=1024
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:53.673 resv_hugepages=0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:53.673 surplus_hugepages=0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:53.673 anon_hugepages=0
00:02:53.673 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28
-- # mapfile -t mem 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 52291180 kB' 'MemFree: 33148820 kB' 'MemAvailable: 37113964 kB' 'Buffers: 2704 kB' 'Cached: 14701276 kB' 'SwapCached: 0 kB' 'Active: 11556308 kB' 'Inactive: 3701476 kB' 'Active(anon): 11091520 kB' 'Inactive(anon): 0 kB' 'Active(file): 464788 kB' 'Inactive(file): 3701476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556912 kB' 'Mapped: 178064 kB' 'Shmem: 10537716 kB' 'KReclaimable: 410544 kB' 'Slab: 706248 kB' 'SReclaimable: 410544 kB' 'SUnreclaim: 295704 kB' 'KernelStack: 9984 kB' 'PageTables: 7512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 33485616 kB' 'Committed_AS: 12086276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 190336 kB' 'VmallocChunk: 0 kB' 'Percpu: 25728 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3488036 kB' 'DirectMap2M: 29988864 kB' 'DirectMap1G: 27262976 kB' 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': '
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:53.674 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / continue scan repeated for each remaining /proc/meminfo key, from MemAvailable through KReclaimable]
00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
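[Editor's note: the repeated continue/IFS/read trace entries above come from setup/common.sh scanning a meminfo file key by key until the requested field (here HugePages_Total) matches. A minimal standalone sketch of that parsing pattern, reconstructed from the trace rather than copied from the SPDK source — the function name get_meminfo_sketch is illustrative, not SPDK's:]

```shell
#!/usr/bin/env bash
# Reconstruction (from the trace, not verbatim SPDK code) of the
# get_meminfo pattern: split each "Key: value [kB]" line on ':' and
# space, skip ("continue") every key that does not match, and echo the
# value on the first hit — the "echo 1024" / "return 0" step above.
get_meminfo_sketch() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example: huge pages configured system-wide
get_meminfo_sketch HugePages_Total
```

[For the per-node files (/sys/devices/system/node/nodeN/meminfo) the traced helper first strips the leading "Node N " prefix with mem=("${mem[@]#Node +([0-9]) }"); the sketch above handles only the flat /proc/meminfo layout.]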
00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:53.675 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32881748 kB' 'MemFree: 19682260 kB' 'MemUsed: 13199488 kB' 'SwapCached: 0 kB' 'Active: 6801868 kB' 'Inactive: 3397596 kB' 'Active(anon): 6590460 kB' 'Inactive(anon): 0 kB' 'Active(file): 211408 kB' 'Inactive(file): 3397596 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9872996 kB' 'Mapped: 100444 kB' 'AnonPages: 329568 kB' 'Shmem: 6263992 kB' 'KernelStack: 5784 kB' 'PageTables: 4048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 271072 kB' 'Slab: 421536 kB' 'SReclaimable: 271072 kB' 'SUnreclaim: 150464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.676 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [the same "[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" sequence repeats for each remaining non-matching node0 meminfo key, in order: MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped] 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.677 22:12:19 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 
'node0=1024 expecting 1024'
00:02:53.677 node0=1024 expecting 1024
00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:53.677
00:02:53.677 real 0m2.338s
00:02:53.677 user 0m1.038s
00:02:53.677 sys 0m1.365s
00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:53.677 22:12:19 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:53.677 ************************************
00:02:53.677 END TEST no_shrink_alloc
00:02:53.677 ************************************
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:53.677 22:12:19 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:53.677
00:02:53.677 real 0m9.733s
00:02:53.677 user 0m4.013s
00:02:53.677 sys 0m5.118s
00:02:53.677 22:12:19 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:53.677 22:12:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:53.677 ************************************
00:02:53.677 END TEST hugepages
00:02:53.677 ************************************
00:02:53.677 22:12:19 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:53.677 22:12:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:53.677 22:12:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:53.677 22:12:19 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:53.677 ************************************
00:02:53.677 START TEST driver
00:02:53.677 ************************************
00:02:53.677 22:12:19 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:02:53.935 * Looking for test storage...
00:02:53.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:53.935 22:12:19 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:02:53.935 22:12:19 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:53.935 22:12:19 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:56.475 22:12:21 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:56.475 22:12:21 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:02:56.475 22:12:21 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:56.475 22:12:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:02:56.475 ************************************
00:02:56.475 START TEST guess_driver
00:02:56.475 ************************************
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 102 > 0 ))
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:02:56.475 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:56.475 Looking for driver=vfio-pci
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:02:56.475 22:12:21 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.044 22:12:22 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:57.983 22:12:23 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:00.517
00:03:00.517 real 0m4.170s
00:03:00.517 user 0m0.899s
00:03:00.517 sys 0m1.553s
00:03:00.517 22:12:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:00.517 22:12:25 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:00.517 ************************************
00:03:00.517 END TEST guess_driver
00:03:00.518 ************************************
00:03:00.518
00:03:00.518 real 0m6.477s
00:03:00.518 user 0m1.396s
00:03:00.518 sys 0m2.472s
00:03:00.518 22:12:25 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:00.518 22:12:25 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:00.518 ************************************
00:03:00.518 END TEST driver
00:03:00.518 ************************************
00:03:00.518 22:12:25 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:00.518 22:12:25 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:00.518 22:12:25 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:00.518 22:12:25 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:00.518 ************************************
00:03:00.518 START TEST devices
00:03:00.518 ************************************
00:03:00.518 22:12:25 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:00.518 * Looking for test storage...
00:03:00.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:00.518 22:12:25 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:00.518 22:12:25 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:00.518 22:12:25 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:00.518 22:12:25 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=()
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme*
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]]
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:84:00.0
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\4\:\0\0\.\0* ]]
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:01.457 22:12:27 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:01.457 No valid GPT data, bailing
00:03:01.457 22:12:27 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:01.457 22:12:27 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:01.457 22:12:27 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:84:00.0
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:01.457 22:12:27 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:01.457 22:12:27 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:01.718 ************************************
00:03:01.718 START TEST nvme_mount
00:03:01.718 ************************************
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:01.718 22:12:27 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:02.659 Creating new GPT entries in memory.
00:03:02.659 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:02.659 other utilities.
00:03:02.659 22:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:02.659 22:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:02.659 22:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:02.659 22:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:02.659 22:12:28 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:03.599 Creating new GPT entries in memory.
00:03:03.599 The operation has completed successfully.
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3737597
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:84:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:03.599 22:12:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.537 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:04.538 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:04.798 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:04.798 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:05.058 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:05.058 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:05.058 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:05.058 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:84:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.058 22:12:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.014 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:06.015 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:84:00.0 data@nvme0n1 '' ''
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:03:06.274 22:12:31
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.274 22:12:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:07.211 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:07.211 00:03:07.211 real 0m5.721s 00:03:07.211 user 0m1.335s 00:03:07.211 sys 0m2.141s 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:07.211 22:12:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:07.211 ************************************ 00:03:07.211 END TEST nvme_mount 00:03:07.211 ************************************ 00:03:07.470 22:12:32 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:07.470 22:12:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:03:07.470 22:12:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:07.470 22:12:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:07.470 ************************************
00:03:07.470 START TEST dm_mount
00:03:07.470 ************************************
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:07.470 22:12:32 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:08.405 Creating new GPT entries in memory.
00:03:08.405 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:08.405 other utilities.
00:03:08.405 22:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:08.405 22:12:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:08.405 22:12:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:08.405 22:12:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:08.405 22:12:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:09.343 Creating new GPT entries in memory.
00:03:09.343 The operation has completed successfully.
00:03:09.343 22:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:09.343 22:12:35 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:09.343 22:12:35 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:09.343 22:12:35 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:09.343 22:12:35 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:10.720 The operation has completed successfully.
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3739381
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:84:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:10.720 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.721 22:12:36 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:84:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:84:00.0
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:84:00.0
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:11.652 22:12:37 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:84:00.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\4\:\0\0\.\0 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:03:12.590 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:03:12.590
00:03:12.590 real 0m5.247s
00:03:12.590 user 0m0.820s
00:03:12.590 sys 0m1.354s
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.590 22:12:38 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:03:12.590 ************************************
00:03:12.590 END TEST dm_mount
00:03:12.590 ************************************
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@1 -- # cleanup
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:12.590 22:12:38 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:12.851 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:12.851 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:12.851 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:12.851 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]]
00:03:12.851 22:12:38 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1
00:03:12.851
00:03:12.851 real 0m12.662s
00:03:12.851 user 0m2.716s
00:03:12.851 sys 0m4.444s
00:03:12.851 22:12:38 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.851 22:12:38 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:12.851 ************************************
00:03:12.851 END TEST devices
00:03:12.851 ************************************
00:03:12.851
00:03:12.851 real 0m38.166s
00:03:12.851 user 0m11.070s
00:03:12.851 sys 0m16.736s
00:03:12.851 22:12:38 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.851 22:12:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:12.851 ************************************
00:03:12.851 END TEST setup.sh
00:03:12.851 ************************************
00:03:13.111 22:12:38 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:14.051 Hugepages
00:03:14.051 node hugesize free / total
00:03:14.051 node0 1048576kB 0 / 0
00:03:14.051 node0 2048kB 2048 / 2048
00:03:14.051 node1 1048576kB 0 / 0
00:03:14.051 node1 2048kB 0 / 0
00:03:14.051
00:03:14.051 Type BDF Vendor Device NUMA Driver Device Block devices
00:03:14.051 I/OAT 0000:00:04.0 8086 3c20 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.1 8086 3c21 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.2 8086 3c22 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.3 8086 3c23 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.4 8086 3c24 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.5 8086 3c25 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.6 8086 3c26 0 ioatdma - -
00:03:14.051 I/OAT 0000:00:04.7 8086 3c27 0 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.0 8086 3c20 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.1 8086 3c21 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.2 8086 3c22 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.3 8086 3c23 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.4 8086 3c24 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.5 8086 3c25 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.6 8086 3c26 1 ioatdma - -
00:03:14.051 I/OAT 0000:80:04.7 8086 3c27 1 ioatdma - -
00:03:14.051 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:03:14.051 22:12:39 -- spdk/autotest.sh@130 -- # uname -s
00:03:14.051 22:12:39 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]]
00:03:14.051 22:12:39 -- spdk/autotest.sh@132 -- # nvme_namespace_revert
00:03:14.051 22:12:39 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:14.989 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci
00:03:14.989 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci
00:03:15.250 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci
00:03:16.192 0000:84:00.0 (8086 0a54): nvme -> vfio-pci
00:03:16.192 22:12:41 -- common/autotest_common.sh@1530 -- # sleep 1
00:03:17.131 22:12:42 -- common/autotest_common.sh@1531 -- # bdfs=()
00:03:17.131 22:12:42 -- common/autotest_common.sh@1531 -- # local bdfs
00:03:17.131 22:12:42 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs))
00:03:17.131 22:12:42 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs
00:03:17.131 22:12:42 -- common/autotest_common.sh@1511 -- # bdfs=()
00:03:17.131 22:12:42 -- common/autotest_common.sh@1511 -- # local bdfs
00:03:17.131 22:12:42 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:17.131 22:12:42 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:03:17.131 22:12:42 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr'
00:03:17.131 22:12:42 -- common/autotest_common.sh@1513 -- # (( 1 == 0 ))
00:03:17.131 22:12:42 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:84:00.0
00:03:17.131 22:12:42 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:18.070 Waiting for block devices as requested
00:03:18.330 0000:84:00.0 (8086 0a54): vfio-pci -> nvme
00:03:18.330 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma
00:03:18.330 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma
00:03:18.330 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma
00:03:18.592 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma
00:03:18.592 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma
00:03:18.592 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma
00:03:18.592 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma
00:03:18.852 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma
00:03:18.852 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma
00:03:18.852 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma
00:03:18.852 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma
00:03:19.113 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma
00:03:19.113 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma
00:03:19.113 0000:80:04.2 (8086 3c22): vfio-pci ->
ioatdma 00:03:19.373 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:03:19.373 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:03:19.373 22:12:44 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 00:03:19.373 22:12:44 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1500 -- # grep 0000:84:00.0/nvme/nvme 00:03:19.373 22:12:44 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:03:19.373 22:12:44 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:03:19.373 22:12:44 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1543 -- # grep oacs 00:03:19.373 22:12:44 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:03:19.373 22:12:44 -- common/autotest_common.sh@1543 -- # oacs=' 0xf' 00:03:19.373 22:12:44 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:03:19.373 22:12:44 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:03:19.373 22:12:44 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:03:19.373 22:12:44 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:03:19.373 22:12:44 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:03:19.373 22:12:44 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:03:19.373 22:12:44 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:03:19.373 22:12:44 -- 
common/autotest_common.sh@1555 -- # continue 00:03:19.373 22:12:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:19.373 22:12:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:19.373 22:12:44 -- common/autotest_common.sh@10 -- # set +x 00:03:19.373 22:12:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:19.373 22:12:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:19.373 22:12:44 -- common/autotest_common.sh@10 -- # set +x 00:03:19.373 22:12:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:20.313 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:20.313 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:20.313 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:03:20.313 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:03:20.573 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:03:21.508 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:03:21.508 22:12:46 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:21.508 22:12:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:21.508 22:12:46 -- common/autotest_common.sh@10 -- # set +x 00:03:21.508 22:12:47 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:21.508 22:12:47 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:03:21.508 22:12:47 -- 
common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:03:21.508 22:12:47 -- common/autotest_common.sh@1575 -- # bdfs=() 00:03:21.508 22:12:47 -- common/autotest_common.sh@1575 -- # local bdfs 00:03:21.508 22:12:47 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:03:21.508 22:12:47 -- common/autotest_common.sh@1511 -- # bdfs=() 00:03:21.508 22:12:47 -- common/autotest_common.sh@1511 -- # local bdfs 00:03:21.508 22:12:47 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:21.508 22:12:47 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:21.509 22:12:47 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:03:21.509 22:12:47 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:03:21.509 22:12:47 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:84:00.0 00:03:21.509 22:12:47 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:03:21.509 22:12:47 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:03:21.509 22:12:47 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:03:21.509 22:12:47 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:21.509 22:12:47 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:03:21.509 22:12:47 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:84:00.0 00:03:21.509 22:12:47 -- common/autotest_common.sh@1590 -- # [[ -z 0000:84:00.0 ]] 00:03:21.509 22:12:47 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=3743389 00:03:21.509 22:12:47 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:21.509 22:12:47 -- common/autotest_common.sh@1596 -- # waitforlisten 3743389 00:03:21.509 22:12:47 -- common/autotest_common.sh@829 -- # '[' -z 3743389 ']' 00:03:21.509 22:12:47 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:21.509 22:12:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:21.509 22:12:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:21.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:21.509 22:12:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:21.509 22:12:47 -- common/autotest_common.sh@10 -- # set +x 00:03:21.509 [2024-07-24 22:12:47.122758] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:03:21.509 [2024-07-24 22:12:47.122861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3743389 ] 00:03:21.509 EAL: No free 2048 kB hugepages reported on node 1 00:03:21.509 [2024-07-24 22:12:47.181701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:21.766 [2024-07-24 22:12:47.298964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:22.023 22:12:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:22.023 22:12:47 -- common/autotest_common.sh@862 -- # return 0 00:03:22.023 22:12:47 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:03:22.023 22:12:47 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:03:22.023 22:12:47 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:03:25.309 nvme0n1 00:03:25.309 22:12:50 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:25.309 [2024-07-24 22:12:50.924583] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:03:25.309 [2024-07-24 22:12:50.924627] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:25.309 request: 00:03:25.309 { 00:03:25.309 "nvme_ctrlr_name": "nvme0", 00:03:25.309 "password": "test", 00:03:25.309 "method": "bdev_nvme_opal_revert", 00:03:25.309 "req_id": 1 00:03:25.309 } 00:03:25.309 Got JSON-RPC error response 00:03:25.309 response: 00:03:25.309 { 00:03:25.309 "code": -32603, 00:03:25.309 "message": "Internal error" 00:03:25.309 } 00:03:25.309 22:12:50 -- common/autotest_common.sh@1602 -- # true 00:03:25.309 22:12:50 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:03:25.309 22:12:50 -- common/autotest_common.sh@1606 -- # killprocess 3743389 00:03:25.309 22:12:50 -- common/autotest_common.sh@948 -- # '[' -z 3743389 ']' 00:03:25.309 22:12:50 -- common/autotest_common.sh@952 -- # kill -0 3743389 00:03:25.309 22:12:50 -- common/autotest_common.sh@953 -- # uname 00:03:25.309 22:12:50 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:25.309 22:12:50 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3743389 00:03:25.309 22:12:50 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:25.309 22:12:50 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:25.309 22:12:50 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3743389' 00:03:25.309 killing process with pid 3743389 00:03:25.309 22:12:50 -- common/autotest_common.sh@967 -- # kill 3743389 00:03:25.309 22:12:50 -- common/autotest_common.sh@972 -- # wait 3743389 00:03:27.208 22:12:52 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:27.208 22:12:52 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:27.208 22:12:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:27.208 22:12:52 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:27.208 22:12:52 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:27.208 22:12:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:27.208 
22:12:52 -- common/autotest_common.sh@10 -- # set +x 00:03:27.208 22:12:52 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:27.208 22:12:52 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:27.208 22:12:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.208 22:12:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.208 22:12:52 -- common/autotest_common.sh@10 -- # set +x 00:03:27.208 ************************************ 00:03:27.208 START TEST env 00:03:27.208 ************************************ 00:03:27.208 22:12:52 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:27.208 * Looking for test storage... 00:03:27.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:27.208 22:12:52 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.208 22:12:52 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.208 22:12:52 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.208 22:12:52 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.208 ************************************ 00:03:27.208 START TEST env_memory 00:03:27.208 ************************************ 00:03:27.208 22:12:52 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:27.208 00:03:27.208 00:03:27.208 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.208 http://cunit.sourceforge.net/ 00:03:27.208 00:03:27.208 00:03:27.208 Suite: memory 00:03:27.208 Test: alloc and free memory map ...[2024-07-24 22:12:52.804716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:27.208 passed 00:03:27.208 Test: mem map translation 
...[2024-07-24 22:12:52.836339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:27.208 [2024-07-24 22:12:52.836369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:27.208 [2024-07-24 22:12:52.836421] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:27.208 [2024-07-24 22:12:52.836436] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:27.208 passed 00:03:27.208 Test: mem map registration ...[2024-07-24 22:12:52.899825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:27.208 [2024-07-24 22:12:52.899850] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:27.468 passed 00:03:27.468 Test: mem map adjacent registrations ...passed 00:03:27.468 00:03:27.468 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.468 suites 1 1 n/a 0 0 00:03:27.468 tests 4 4 4 0 0 00:03:27.468 asserts 152 152 152 0 n/a 00:03:27.468 00:03:27.468 Elapsed time = 0.221 seconds 00:03:27.468 00:03:27.468 real 0m0.230s 00:03:27.468 user 0m0.220s 00:03:27.468 sys 0m0.009s 00:03:27.468 22:12:52 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.468 22:12:52 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:27.468 ************************************ 00:03:27.468 END TEST env_memory 00:03:27.468 
************************************ 00:03:27.468 22:12:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.468 22:12:53 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.468 22:12:53 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.468 22:12:53 env -- common/autotest_common.sh@10 -- # set +x 00:03:27.468 ************************************ 00:03:27.468 START TEST env_vtophys 00:03:27.468 ************************************ 00:03:27.468 22:12:53 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:27.468 EAL: lib.eal log level changed from notice to debug 00:03:27.468 EAL: Detected lcore 0 as core 0 on socket 0 00:03:27.468 EAL: Detected lcore 1 as core 1 on socket 0 00:03:27.468 EAL: Detected lcore 2 as core 2 on socket 0 00:03:27.468 EAL: Detected lcore 3 as core 3 on socket 0 00:03:27.468 EAL: Detected lcore 4 as core 4 on socket 0 00:03:27.468 EAL: Detected lcore 5 as core 5 on socket 0 00:03:27.468 EAL: Detected lcore 6 as core 6 on socket 0 00:03:27.468 EAL: Detected lcore 7 as core 7 on socket 0 00:03:27.468 EAL: Detected lcore 8 as core 0 on socket 1 00:03:27.468 EAL: Detected lcore 9 as core 1 on socket 1 00:03:27.468 EAL: Detected lcore 10 as core 2 on socket 1 00:03:27.468 EAL: Detected lcore 11 as core 3 on socket 1 00:03:27.468 EAL: Detected lcore 12 as core 4 on socket 1 00:03:27.468 EAL: Detected lcore 13 as core 5 on socket 1 00:03:27.468 EAL: Detected lcore 14 as core 6 on socket 1 00:03:27.468 EAL: Detected lcore 15 as core 7 on socket 1 00:03:27.468 EAL: Detected lcore 16 as core 0 on socket 0 00:03:27.468 EAL: Detected lcore 17 as core 1 on socket 0 00:03:27.468 EAL: Detected lcore 18 as core 2 on socket 0 00:03:27.468 EAL: Detected lcore 19 as core 3 on socket 0 00:03:27.468 EAL: Detected lcore 20 as core 4 on socket 0 00:03:27.468 EAL: Detected 
lcore 21 as core 5 on socket 0 00:03:27.468 EAL: Detected lcore 22 as core 6 on socket 0 00:03:27.468 EAL: Detected lcore 23 as core 7 on socket 0 00:03:27.468 EAL: Detected lcore 24 as core 0 on socket 1 00:03:27.468 EAL: Detected lcore 25 as core 1 on socket 1 00:03:27.468 EAL: Detected lcore 26 as core 2 on socket 1 00:03:27.468 EAL: Detected lcore 27 as core 3 on socket 1 00:03:27.468 EAL: Detected lcore 28 as core 4 on socket 1 00:03:27.468 EAL: Detected lcore 29 as core 5 on socket 1 00:03:27.468 EAL: Detected lcore 30 as core 6 on socket 1 00:03:27.468 EAL: Detected lcore 31 as core 7 on socket 1 00:03:27.468 EAL: Maximum logical cores by configuration: 128 00:03:27.468 EAL: Detected CPU lcores: 32 00:03:27.468 EAL: Detected NUMA nodes: 2 00:03:27.468 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:27.468 EAL: Detected shared linkage of DPDK 00:03:27.468 EAL: No shared files mode enabled, IPC will be disabled 00:03:27.468 EAL: Bus pci wants IOVA as 'DC' 00:03:27.468 EAL: Buses did not request a specific IOVA mode. 00:03:27.468 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:27.468 EAL: Selected IOVA mode 'VA' 00:03:27.468 EAL: No free 2048 kB hugepages reported on node 1 00:03:27.468 EAL: Probing VFIO support... 00:03:27.468 EAL: IOMMU type 1 (Type 1) is supported 00:03:27.468 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:27.468 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:27.468 EAL: VFIO support initialized 00:03:27.468 EAL: Ask a virtual area of 0x2e000 bytes 00:03:27.468 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:27.468 EAL: Setting up physically contiguous memory... 
00:03:27.468 EAL: Setting maximum number of open files to 524288 00:03:27.469 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:27.469 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:27.469 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:27.469 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:27.469 EAL: Ask a virtual area of 0x61000 bytes 00:03:27.469 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:27.469 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:27.469 EAL: Ask a virtual area of 0x400000000 bytes 00:03:27.469 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:27.469 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:27.469 EAL: Hugepages will be freed exactly as allocated. 
00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: TSC frequency is ~2700000 KHz 00:03:27.469 EAL: Main lcore 0 is ready (tid=7fb5bf63ca00;cpuset=[0]) 00:03:27.469 EAL: Trying to obtain current memory policy. 00:03:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.469 EAL: Restoring previous memory policy: 0 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was expanded by 2MB 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:27.469 EAL: Mem event callback 'spdk:(nil)' registered 00:03:27.469 00:03:27.469 00:03:27.469 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.469 http://cunit.sourceforge.net/ 00:03:27.469 00:03:27.469 00:03:27.469 Suite: components_suite 00:03:27.469 Test: vtophys_malloc_test ...passed 00:03:27.469 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.469 EAL: Restoring previous memory policy: 4 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was expanded by 4MB 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was shrunk by 4MB 00:03:27.469 EAL: Trying to obtain current memory policy. 
00:03:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.469 EAL: Restoring previous memory policy: 4 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was expanded by 6MB 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was shrunk by 6MB 00:03:27.469 EAL: Trying to obtain current memory policy. 00:03:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.469 EAL: Restoring previous memory policy: 4 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was expanded by 10MB 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was shrunk by 10MB 00:03:27.469 EAL: Trying to obtain current memory policy. 00:03:27.469 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.469 EAL: Restoring previous memory policy: 4 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was expanded by 18MB 00:03:27.469 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.469 EAL: request: mp_malloc_sync 00:03:27.469 EAL: No shared files mode enabled, IPC is disabled 00:03:27.469 EAL: Heap on socket 0 was shrunk by 18MB 00:03:27.470 EAL: Trying to obtain current memory policy. 
00:03:27.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.470 EAL: Restoring previous memory policy: 4 00:03:27.470 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.470 EAL: request: mp_malloc_sync 00:03:27.470 EAL: No shared files mode enabled, IPC is disabled 00:03:27.470 EAL: Heap on socket 0 was expanded by 34MB 00:03:27.470 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.470 EAL: request: mp_malloc_sync 00:03:27.470 EAL: No shared files mode enabled, IPC is disabled 00:03:27.470 EAL: Heap on socket 0 was shrunk by 34MB 00:03:27.470 EAL: Trying to obtain current memory policy. 00:03:27.470 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.470 EAL: Restoring previous memory policy: 4 00:03:27.470 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.470 EAL: request: mp_malloc_sync 00:03:27.470 EAL: No shared files mode enabled, IPC is disabled 00:03:27.470 EAL: Heap on socket 0 was expanded by 66MB 00:03:27.470 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.728 EAL: request: mp_malloc_sync 00:03:27.728 EAL: No shared files mode enabled, IPC is disabled 00:03:27.728 EAL: Heap on socket 0 was shrunk by 66MB 00:03:27.728 EAL: Trying to obtain current memory policy. 00:03:27.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.728 EAL: Restoring previous memory policy: 4 00:03:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.728 EAL: request: mp_malloc_sync 00:03:27.728 EAL: No shared files mode enabled, IPC is disabled 00:03:27.728 EAL: Heap on socket 0 was expanded by 130MB 00:03:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.728 EAL: request: mp_malloc_sync 00:03:27.728 EAL: No shared files mode enabled, IPC is disabled 00:03:27.728 EAL: Heap on socket 0 was shrunk by 130MB 00:03:27.728 EAL: Trying to obtain current memory policy. 
00:03:27.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.728 EAL: Restoring previous memory policy: 4 00:03:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.728 EAL: request: mp_malloc_sync 00:03:27.728 EAL: No shared files mode enabled, IPC is disabled 00:03:27.728 EAL: Heap on socket 0 was expanded by 258MB 00:03:27.728 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.728 EAL: request: mp_malloc_sync 00:03:27.728 EAL: No shared files mode enabled, IPC is disabled 00:03:27.728 EAL: Heap on socket 0 was shrunk by 258MB 00:03:27.728 EAL: Trying to obtain current memory policy. 00:03:27.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:27.985 EAL: Restoring previous memory policy: 4 00:03:27.985 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.985 EAL: request: mp_malloc_sync 00:03:27.985 EAL: No shared files mode enabled, IPC is disabled 00:03:27.985 EAL: Heap on socket 0 was expanded by 514MB 00:03:27.985 EAL: Calling mem event callback 'spdk:(nil)' 00:03:27.985 EAL: request: mp_malloc_sync 00:03:27.985 EAL: No shared files mode enabled, IPC is disabled 00:03:27.985 EAL: Heap on socket 0 was shrunk by 514MB 00:03:27.985 EAL: Trying to obtain current memory policy. 
00:03:27.985 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:28.244 EAL: Restoring previous memory policy: 4 00:03:28.244 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.244 EAL: request: mp_malloc_sync 00:03:28.244 EAL: No shared files mode enabled, IPC is disabled 00:03:28.244 EAL: Heap on socket 0 was expanded by 1026MB 00:03:28.503 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.503 EAL: request: mp_malloc_sync 00:03:28.503 EAL: No shared files mode enabled, IPC is disabled 00:03:28.503 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:28.503 passed 00:03:28.503 00:03:28.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.503 suites 1 1 n/a 0 0 00:03:28.503 tests 2 2 2 0 0 00:03:28.503 asserts 497 497 497 0 n/a 00:03:28.503 00:03:28.503 Elapsed time = 0.950 seconds 00:03:28.503 EAL: Calling mem event callback 'spdk:(nil)' 00:03:28.503 EAL: request: mp_malloc_sync 00:03:28.503 EAL: No shared files mode enabled, IPC is disabled 00:03:28.503 EAL: Heap on socket 0 was shrunk by 2MB 00:03:28.503 EAL: No shared files mode enabled, IPC is disabled 00:03:28.503 EAL: No shared files mode enabled, IPC is disabled 00:03:28.503 EAL: No shared files mode enabled, IPC is disabled 00:03:28.503 00:03:28.503 real 0m1.060s 00:03:28.503 user 0m0.514s 00:03:28.503 sys 0m0.516s 00:03:28.503 22:12:54 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.503 22:12:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:28.503 ************************************ 00:03:28.503 END TEST env_vtophys 00:03:28.503 ************************************ 00:03:28.503 22:12:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.503 22:12:54 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:28.503 22:12:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.503 22:12:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.503 
************************************ 00:03:28.503 START TEST env_pci 00:03:28.504 ************************************ 00:03:28.504 22:12:54 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:28.504 00:03:28.504 00:03:28.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:28.504 http://cunit.sourceforge.net/ 00:03:28.504 00:03:28.504 00:03:28.504 Suite: pci 00:03:28.504 Test: pci_hook ...[2024-07-24 22:12:54.170538] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3744080 has claimed it 00:03:28.504 EAL: Cannot find device (10000:00:01.0) 00:03:28.504 EAL: Failed to attach device on primary process 00:03:28.504 passed 00:03:28.504 00:03:28.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:28.504 suites 1 1 n/a 0 0 00:03:28.504 tests 1 1 1 0 0 00:03:28.504 asserts 25 25 25 0 n/a 00:03:28.504 00:03:28.504 Elapsed time = 0.018 seconds 00:03:28.504 00:03:28.504 real 0m0.032s 00:03:28.504 user 0m0.015s 00:03:28.504 sys 0m0.016s 00:03:28.504 22:12:54 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:28.504 22:12:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:28.504 ************************************ 00:03:28.504 END TEST env_pci 00:03:28.504 ************************************ 00:03:28.764 22:12:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:28.764 22:12:54 env -- env/env.sh@15 -- # uname 00:03:28.764 22:12:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:28.764 22:12:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:28.764 22:12:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.764 22:12:54 env -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:28.764 22:12:54 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:28.764 22:12:54 env -- common/autotest_common.sh@10 -- # set +x 00:03:28.764 ************************************ 00:03:28.764 START TEST env_dpdk_post_init 00:03:28.764 ************************************ 00:03:28.764 22:12:54 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:28.764 EAL: Detected CPU lcores: 32 00:03:28.764 EAL: Detected NUMA nodes: 2 00:03:28.764 EAL: Detected shared linkage of DPDK 00:03:28.764 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:28.764 EAL: Selected IOVA mode 'VA' 00:03:28.764 EAL: No free 2048 kB hugepages reported on node 1 00:03:28.764 EAL: VFIO support initialized 00:03:28.764 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:28.764 EAL: Using IOMMU type 1 (Type 1) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:00:04.0 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:00:04.1 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 0000:00:04.2 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:00:04.3 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:00:04.4 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:00:04.5 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:00:04.6 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:00:04.7 (socket 0) 00:03:28.764 EAL: Probe PCI driver: spdk_ioat (8086:3c20) device: 0000:80:04.0 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c21) device: 0000:80:04.1 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c22) device: 
0000:80:04.2 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c23) device: 0000:80:04.3 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c24) device: 0000:80:04.4 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c25) device: 0000:80:04.5 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c26) device: 0000:80:04.6 (socket 1) 00:03:29.024 EAL: Probe PCI driver: spdk_ioat (8086:3c27) device: 0000:80:04.7 (socket 1) 00:03:29.593 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:03:32.873 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:03:32.873 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:03:33.132 Starting DPDK initialization... 00:03:33.132 Starting SPDK post initialization... 00:03:33.132 SPDK NVMe probe 00:03:33.132 Attaching to 0000:84:00.0 00:03:33.132 Attached to 0000:84:00.0 00:03:33.132 Cleaning up... 00:03:33.132 00:03:33.132 real 0m4.378s 00:03:33.132 user 0m3.262s 00:03:33.132 sys 0m0.174s 00:03:33.132 22:12:58 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.132 22:12:58 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:33.132 ************************************ 00:03:33.132 END TEST env_dpdk_post_init 00:03:33.132 ************************************ 00:03:33.132 22:12:58 env -- env/env.sh@26 -- # uname 00:03:33.132 22:12:58 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:33.132 22:12:58 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.132 22:12:58 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.132 22:12:58 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.132 22:12:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.132 ************************************ 00:03:33.132 START TEST env_mem_callbacks 00:03:33.132 
************************************ 00:03:33.132 22:12:58 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:33.132 EAL: Detected CPU lcores: 32 00:03:33.132 EAL: Detected NUMA nodes: 2 00:03:33.132 EAL: Detected shared linkage of DPDK 00:03:33.132 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:33.132 EAL: Selected IOVA mode 'VA' 00:03:33.132 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.132 EAL: VFIO support initialized 00:03:33.132 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:33.132 00:03:33.132 00:03:33.132 CUnit - A unit testing framework for C - Version 2.1-3 00:03:33.132 http://cunit.sourceforge.net/ 00:03:33.132 00:03:33.132 00:03:33.132 Suite: memory 00:03:33.132 Test: test ... 00:03:33.132 register 0x200000200000 2097152 00:03:33.132 malloc 3145728 00:03:33.132 register 0x200000400000 4194304 00:03:33.132 buf 0x200000500000 len 3145728 PASSED 00:03:33.132 malloc 64 00:03:33.132 buf 0x2000004fff40 len 64 PASSED 00:03:33.132 malloc 4194304 00:03:33.132 register 0x200000800000 6291456 00:03:33.132 buf 0x200000a00000 len 4194304 PASSED 00:03:33.132 free 0x200000500000 3145728 00:03:33.132 free 0x2000004fff40 64 00:03:33.132 unregister 0x200000400000 4194304 PASSED 00:03:33.132 free 0x200000a00000 4194304 00:03:33.132 unregister 0x200000800000 6291456 PASSED 00:03:33.132 malloc 8388608 00:03:33.132 register 0x200000400000 10485760 00:03:33.132 buf 0x200000600000 len 8388608 PASSED 00:03:33.132 free 0x200000600000 8388608 00:03:33.132 unregister 0x200000400000 10485760 PASSED 00:03:33.132 passed 00:03:33.132 00:03:33.132 Run Summary: Type Total Ran Passed Failed Inactive 00:03:33.132 suites 1 1 n/a 0 0 00:03:33.132 tests 1 1 1 0 0 00:03:33.132 asserts 15 15 15 0 n/a 00:03:33.132 00:03:33.132 Elapsed time = 0.005 seconds 00:03:33.132 00:03:33.132 real 0m0.045s 00:03:33.132 user 0m0.012s 00:03:33.132 sys 0m0.033s 
00:03:33.132 22:12:58 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.132 22:12:58 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:33.132 ************************************ 00:03:33.132 END TEST env_mem_callbacks 00:03:33.132 ************************************ 00:03:33.132 00:03:33.132 real 0m6.072s 00:03:33.132 user 0m4.148s 00:03:33.132 sys 0m0.968s 00:03:33.132 22:12:58 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.132 22:12:58 env -- common/autotest_common.sh@10 -- # set +x 00:03:33.132 ************************************ 00:03:33.133 END TEST env 00:03:33.133 ************************************ 00:03:33.133 22:12:58 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.133 22:12:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.133 22:12:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.133 22:12:58 -- common/autotest_common.sh@10 -- # set +x 00:03:33.133 ************************************ 00:03:33.133 START TEST rpc 00:03:33.133 ************************************ 00:03:33.133 22:12:58 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:33.391 * Looking for test storage... 
00:03:33.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.391 22:12:58 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3744610 00:03:33.391 22:12:58 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:33.391 22:12:58 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:33.391 22:12:58 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3744610 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@829 -- # '[' -z 3744610 ']' 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:33.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:33.391 22:12:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.391 [2024-07-24 22:12:58.910992] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:03:33.391 [2024-07-24 22:12:58.911094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744610 ] 00:03:33.391 EAL: No free 2048 kB hugepages reported on node 1 00:03:33.391 [2024-07-24 22:12:58.970740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:33.391 [2024-07-24 22:12:59.087447] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:33.391 [2024-07-24 22:12:59.087517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3744610' to capture a snapshot of events at runtime. 00:03:33.391 [2024-07-24 22:12:59.087533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:33.391 [2024-07-24 22:12:59.087546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:33.391 [2024-07-24 22:12:59.087558] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3744610 for offline analysis/debug. 00:03:33.391 [2024-07-24 22:12:59.087596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:33.649 22:12:59 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:33.649 22:12:59 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:33.649 22:12:59 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.649 22:12:59 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:33.649 22:12:59 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:33.649 22:12:59 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:33.649 22:12:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.649 22:12:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.649 22:12:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:33.649 
************************************ 00:03:33.649 START TEST rpc_integrity 00:03:33.649 ************************************ 00:03:33.649 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:33.650 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:33.650 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.650 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:33.908 { 00:03:33.908 "name": "Malloc0", 00:03:33.908 "aliases": [ 00:03:33.908 "bafb6da7-7d09-4a9d-99ea-6e895e4666fd" 00:03:33.908 ], 00:03:33.908 "product_name": "Malloc disk", 00:03:33.908 "block_size": 512, 00:03:33.908 "num_blocks": 16384, 00:03:33.908 "uuid": "bafb6da7-7d09-4a9d-99ea-6e895e4666fd", 00:03:33.908 
"assigned_rate_limits": { 00:03:33.908 "rw_ios_per_sec": 0, 00:03:33.908 "rw_mbytes_per_sec": 0, 00:03:33.908 "r_mbytes_per_sec": 0, 00:03:33.908 "w_mbytes_per_sec": 0 00:03:33.908 }, 00:03:33.908 "claimed": false, 00:03:33.908 "zoned": false, 00:03:33.908 "supported_io_types": { 00:03:33.908 "read": true, 00:03:33.908 "write": true, 00:03:33.908 "unmap": true, 00:03:33.908 "flush": true, 00:03:33.908 "reset": true, 00:03:33.908 "nvme_admin": false, 00:03:33.908 "nvme_io": false, 00:03:33.908 "nvme_io_md": false, 00:03:33.908 "write_zeroes": true, 00:03:33.908 "zcopy": true, 00:03:33.908 "get_zone_info": false, 00:03:33.908 "zone_management": false, 00:03:33.908 "zone_append": false, 00:03:33.908 "compare": false, 00:03:33.908 "compare_and_write": false, 00:03:33.908 "abort": true, 00:03:33.908 "seek_hole": false, 00:03:33.908 "seek_data": false, 00:03:33.908 "copy": true, 00:03:33.908 "nvme_iov_md": false 00:03:33.908 }, 00:03:33.908 "memory_domains": [ 00:03:33.908 { 00:03:33.908 "dma_device_id": "system", 00:03:33.908 "dma_device_type": 1 00:03:33.908 }, 00:03:33.908 { 00:03:33.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.908 "dma_device_type": 2 00:03:33.908 } 00:03:33.908 ], 00:03:33.908 "driver_specific": {} 00:03:33.908 } 00:03:33.908 ]' 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.908 [2024-07-24 22:12:59.461991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:33.908 [2024-07-24 22:12:59.462038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:33.908 [2024-07-24 22:12:59.462061] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc60380 00:03:33.908 [2024-07-24 22:12:59.462075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:33.908 [2024-07-24 22:12:59.463613] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:33.908 [2024-07-24 22:12:59.463639] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:33.908 Passthru0 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.908 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.908 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:33.908 { 00:03:33.908 "name": "Malloc0", 00:03:33.908 "aliases": [ 00:03:33.908 "bafb6da7-7d09-4a9d-99ea-6e895e4666fd" 00:03:33.908 ], 00:03:33.908 "product_name": "Malloc disk", 00:03:33.908 "block_size": 512, 00:03:33.908 "num_blocks": 16384, 00:03:33.908 "uuid": "bafb6da7-7d09-4a9d-99ea-6e895e4666fd", 00:03:33.908 "assigned_rate_limits": { 00:03:33.908 "rw_ios_per_sec": 0, 00:03:33.908 "rw_mbytes_per_sec": 0, 00:03:33.908 "r_mbytes_per_sec": 0, 00:03:33.908 "w_mbytes_per_sec": 0 00:03:33.908 }, 00:03:33.908 "claimed": true, 00:03:33.908 "claim_type": "exclusive_write", 00:03:33.908 "zoned": false, 00:03:33.908 "supported_io_types": { 00:03:33.908 "read": true, 00:03:33.908 "write": true, 00:03:33.908 "unmap": true, 00:03:33.908 "flush": true, 00:03:33.908 "reset": true, 00:03:33.908 "nvme_admin": false, 00:03:33.908 "nvme_io": false, 00:03:33.908 "nvme_io_md": false, 00:03:33.908 "write_zeroes": true, 00:03:33.908 "zcopy": true, 00:03:33.908 "get_zone_info": false, 00:03:33.908 
"zone_management": false, 00:03:33.908 "zone_append": false, 00:03:33.908 "compare": false, 00:03:33.908 "compare_and_write": false, 00:03:33.908 "abort": true, 00:03:33.908 "seek_hole": false, 00:03:33.908 "seek_data": false, 00:03:33.908 "copy": true, 00:03:33.908 "nvme_iov_md": false 00:03:33.908 }, 00:03:33.908 "memory_domains": [ 00:03:33.908 { 00:03:33.908 "dma_device_id": "system", 00:03:33.908 "dma_device_type": 1 00:03:33.908 }, 00:03:33.908 { 00:03:33.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.908 "dma_device_type": 2 00:03:33.908 } 00:03:33.908 ], 00:03:33.908 "driver_specific": {} 00:03:33.908 }, 00:03:33.908 { 00:03:33.908 "name": "Passthru0", 00:03:33.908 "aliases": [ 00:03:33.908 "53b7fdc9-bf28-56bd-a5dd-8556799960b5" 00:03:33.908 ], 00:03:33.908 "product_name": "passthru", 00:03:33.908 "block_size": 512, 00:03:33.908 "num_blocks": 16384, 00:03:33.908 "uuid": "53b7fdc9-bf28-56bd-a5dd-8556799960b5", 00:03:33.908 "assigned_rate_limits": { 00:03:33.908 "rw_ios_per_sec": 0, 00:03:33.908 "rw_mbytes_per_sec": 0, 00:03:33.908 "r_mbytes_per_sec": 0, 00:03:33.908 "w_mbytes_per_sec": 0 00:03:33.908 }, 00:03:33.908 "claimed": false, 00:03:33.909 "zoned": false, 00:03:33.909 "supported_io_types": { 00:03:33.909 "read": true, 00:03:33.909 "write": true, 00:03:33.909 "unmap": true, 00:03:33.909 "flush": true, 00:03:33.909 "reset": true, 00:03:33.909 "nvme_admin": false, 00:03:33.909 "nvme_io": false, 00:03:33.909 "nvme_io_md": false, 00:03:33.909 "write_zeroes": true, 00:03:33.909 "zcopy": true, 00:03:33.909 "get_zone_info": false, 00:03:33.909 "zone_management": false, 00:03:33.909 "zone_append": false, 00:03:33.909 "compare": false, 00:03:33.909 "compare_and_write": false, 00:03:33.909 "abort": true, 00:03:33.909 "seek_hole": false, 00:03:33.909 "seek_data": false, 00:03:33.909 "copy": true, 00:03:33.909 "nvme_iov_md": false 00:03:33.909 }, 00:03:33.909 "memory_domains": [ 00:03:33.909 { 00:03:33.909 "dma_device_id": "system", 00:03:33.909 
"dma_device_type": 1 00:03:33.909 }, 00:03:33.909 { 00:03:33.909 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:33.909 "dma_device_type": 2 00:03:33.909 } 00:03:33.909 ], 00:03:33.909 "driver_specific": { 00:03:33.909 "passthru": { 00:03:33.909 "name": "Passthru0", 00:03:33.909 "base_bdev_name": "Malloc0" 00:03:33.909 } 00:03:33.909 } 00:03:33.909 } 00:03:33.909 ]' 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:33.909 22:12:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:33.909 00:03:33.909 real 0m0.240s 00:03:33.909 user 0m0.160s 00:03:33.909 sys 0m0.025s 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:03:33.909 22:12:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:33.909 ************************************ 00:03:33.909 END TEST rpc_integrity 00:03:33.909 ************************************ 00:03:33.909 22:12:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:33.909 22:12:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:33.909 22:12:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:33.909 22:12:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.167 ************************************ 00:03:34.167 START TEST rpc_plugins 00:03:34.167 ************************************ 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:34.167 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.167 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:34.167 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.167 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.167 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:34.167 { 00:03:34.167 "name": "Malloc1", 00:03:34.167 "aliases": [ 00:03:34.167 "6cc7df24-64a3-41a0-92f6-4e4332ca4f78" 00:03:34.167 ], 00:03:34.167 "product_name": "Malloc disk", 00:03:34.167 "block_size": 4096, 00:03:34.167 "num_blocks": 256, 00:03:34.167 "uuid": "6cc7df24-64a3-41a0-92f6-4e4332ca4f78", 00:03:34.167 "assigned_rate_limits": { 00:03:34.167 
"rw_ios_per_sec": 0, 00:03:34.167 "rw_mbytes_per_sec": 0, 00:03:34.167 "r_mbytes_per_sec": 0, 00:03:34.167 "w_mbytes_per_sec": 0 00:03:34.167 }, 00:03:34.167 "claimed": false, 00:03:34.167 "zoned": false, 00:03:34.167 "supported_io_types": { 00:03:34.167 "read": true, 00:03:34.168 "write": true, 00:03:34.168 "unmap": true, 00:03:34.168 "flush": true, 00:03:34.168 "reset": true, 00:03:34.168 "nvme_admin": false, 00:03:34.168 "nvme_io": false, 00:03:34.168 "nvme_io_md": false, 00:03:34.168 "write_zeroes": true, 00:03:34.168 "zcopy": true, 00:03:34.168 "get_zone_info": false, 00:03:34.168 "zone_management": false, 00:03:34.168 "zone_append": false, 00:03:34.168 "compare": false, 00:03:34.168 "compare_and_write": false, 00:03:34.168 "abort": true, 00:03:34.168 "seek_hole": false, 00:03:34.168 "seek_data": false, 00:03:34.168 "copy": true, 00:03:34.168 "nvme_iov_md": false 00:03:34.168 }, 00:03:34.168 "memory_domains": [ 00:03:34.168 { 00:03:34.168 "dma_device_id": "system", 00:03:34.168 "dma_device_type": 1 00:03:34.168 }, 00:03:34.168 { 00:03:34.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.168 "dma_device_type": 2 00:03:34.168 } 00:03:34.168 ], 00:03:34.168 "driver_specific": {} 00:03:34.168 } 00:03:34.168 ]' 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:34.168 22:12:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:34.168 00:03:34.168 real 0m0.126s 00:03:34.168 user 0m0.075s 00:03:34.168 sys 0m0.016s 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.168 22:12:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:34.168 ************************************ 00:03:34.168 END TEST rpc_plugins 00:03:34.168 ************************************ 00:03:34.168 22:12:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:34.168 22:12:59 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.168 22:12:59 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.168 22:12:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.168 ************************************ 00:03:34.168 START TEST rpc_trace_cmd_test 00:03:34.168 ************************************ 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:34.168 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3744610", 00:03:34.168 "tpoint_group_mask": "0x8", 00:03:34.168 "iscsi_conn": { 00:03:34.168 "mask": "0x2", 00:03:34.168 
"tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "scsi": { 00:03:34.168 "mask": "0x4", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "bdev": { 00:03:34.168 "mask": "0x8", 00:03:34.168 "tpoint_mask": "0xffffffffffffffff" 00:03:34.168 }, 00:03:34.168 "nvmf_rdma": { 00:03:34.168 "mask": "0x10", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "nvmf_tcp": { 00:03:34.168 "mask": "0x20", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "ftl": { 00:03:34.168 "mask": "0x40", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "blobfs": { 00:03:34.168 "mask": "0x80", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "dsa": { 00:03:34.168 "mask": "0x200", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "thread": { 00:03:34.168 "mask": "0x400", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "nvme_pcie": { 00:03:34.168 "mask": "0x800", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "iaa": { 00:03:34.168 "mask": "0x1000", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "nvme_tcp": { 00:03:34.168 "mask": "0x2000", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "bdev_nvme": { 00:03:34.168 "mask": "0x4000", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 }, 00:03:34.168 "sock": { 00:03:34.168 "mask": "0x8000", 00:03:34.168 "tpoint_mask": "0x0" 00:03:34.168 } 00:03:34.168 }' 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:34.168 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:34.426 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:34.426 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:34.427 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:34.427 22:12:59 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:34.427 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:34.427 22:12:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:34.427 22:13:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:34.427 00:03:34.427 real 0m0.209s 00:03:34.427 user 0m0.184s 00:03:34.427 sys 0m0.016s 00:03:34.427 22:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.427 22:13:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:34.427 ************************************ 00:03:34.427 END TEST rpc_trace_cmd_test 00:03:34.427 ************************************ 00:03:34.427 22:13:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:34.427 22:13:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:34.427 22:13:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:34.427 22:13:00 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:34.427 22:13:00 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.427 22:13:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:34.427 ************************************ 00:03:34.427 START TEST rpc_daemon_integrity 00:03:34.427 ************************************ 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:34.427 22:13:00 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.427 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:34.689 { 00:03:34.689 "name": "Malloc2", 00:03:34.689 "aliases": [ 00:03:34.689 "6c405689-04b3-4643-9e07-fabea7c10eb0" 00:03:34.689 ], 00:03:34.689 "product_name": "Malloc disk", 00:03:34.689 "block_size": 512, 00:03:34.689 "num_blocks": 16384, 00:03:34.689 "uuid": "6c405689-04b3-4643-9e07-fabea7c10eb0", 00:03:34.689 "assigned_rate_limits": { 00:03:34.689 "rw_ios_per_sec": 0, 00:03:34.689 "rw_mbytes_per_sec": 0, 00:03:34.689 "r_mbytes_per_sec": 0, 00:03:34.689 "w_mbytes_per_sec": 0 00:03:34.689 }, 00:03:34.689 "claimed": false, 00:03:34.689 "zoned": false, 00:03:34.689 "supported_io_types": { 00:03:34.689 "read": true, 00:03:34.689 "write": true, 00:03:34.689 "unmap": true, 00:03:34.689 "flush": true, 00:03:34.689 "reset": true, 00:03:34.689 "nvme_admin": false, 00:03:34.689 "nvme_io": false, 00:03:34.689 "nvme_io_md": false, 00:03:34.689 "write_zeroes": true, 00:03:34.689 "zcopy": true, 00:03:34.689 "get_zone_info": false, 00:03:34.689 "zone_management": false, 00:03:34.689 
"zone_append": false, 00:03:34.689 "compare": false, 00:03:34.689 "compare_and_write": false, 00:03:34.689 "abort": true, 00:03:34.689 "seek_hole": false, 00:03:34.689 "seek_data": false, 00:03:34.689 "copy": true, 00:03:34.689 "nvme_iov_md": false 00:03:34.689 }, 00:03:34.689 "memory_domains": [ 00:03:34.689 { 00:03:34.689 "dma_device_id": "system", 00:03:34.689 "dma_device_type": 1 00:03:34.689 }, 00:03:34.689 { 00:03:34.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.689 "dma_device_type": 2 00:03:34.689 } 00:03:34.689 ], 00:03:34.689 "driver_specific": {} 00:03:34.689 } 00:03:34.689 ]' 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.689 [2024-07-24 22:13:00.196264] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:34.689 [2024-07-24 22:13:00.196313] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:34.689 [2024-07-24 22:13:00.196339] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaae0c0 00:03:34.689 [2024-07-24 22:13:00.196354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:34.689 [2024-07-24 22:13:00.197804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:34.689 [2024-07-24 22:13:00.197831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:34.689 Passthru0 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.689 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:34.689 { 00:03:34.689 "name": "Malloc2", 00:03:34.689 "aliases": [ 00:03:34.689 "6c405689-04b3-4643-9e07-fabea7c10eb0" 00:03:34.689 ], 00:03:34.689 "product_name": "Malloc disk", 00:03:34.689 "block_size": 512, 00:03:34.689 "num_blocks": 16384, 00:03:34.689 "uuid": "6c405689-04b3-4643-9e07-fabea7c10eb0", 00:03:34.689 "assigned_rate_limits": { 00:03:34.689 "rw_ios_per_sec": 0, 00:03:34.689 "rw_mbytes_per_sec": 0, 00:03:34.689 "r_mbytes_per_sec": 0, 00:03:34.689 "w_mbytes_per_sec": 0 00:03:34.689 }, 00:03:34.689 "claimed": true, 00:03:34.689 "claim_type": "exclusive_write", 00:03:34.689 "zoned": false, 00:03:34.689 "supported_io_types": { 00:03:34.689 "read": true, 00:03:34.690 "write": true, 00:03:34.690 "unmap": true, 00:03:34.690 "flush": true, 00:03:34.690 "reset": true, 00:03:34.690 "nvme_admin": false, 00:03:34.690 "nvme_io": false, 00:03:34.690 "nvme_io_md": false, 00:03:34.690 "write_zeroes": true, 00:03:34.690 "zcopy": true, 00:03:34.690 "get_zone_info": false, 00:03:34.690 "zone_management": false, 00:03:34.690 "zone_append": false, 00:03:34.690 "compare": false, 00:03:34.690 "compare_and_write": false, 00:03:34.690 "abort": true, 00:03:34.690 "seek_hole": false, 00:03:34.690 "seek_data": false, 00:03:34.690 "copy": true, 00:03:34.690 "nvme_iov_md": false 00:03:34.690 }, 00:03:34.690 "memory_domains": [ 00:03:34.690 { 00:03:34.690 "dma_device_id": "system", 00:03:34.690 "dma_device_type": 1 00:03:34.690 }, 00:03:34.690 { 00:03:34.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.690 "dma_device_type": 2 00:03:34.690 } 00:03:34.690 ], 00:03:34.690 
"driver_specific": {} 00:03:34.690 }, 00:03:34.690 { 00:03:34.690 "name": "Passthru0", 00:03:34.690 "aliases": [ 00:03:34.690 "ee9a5646-1c1b-5940-b2ec-30b5e221fbcf" 00:03:34.690 ], 00:03:34.690 "product_name": "passthru", 00:03:34.690 "block_size": 512, 00:03:34.690 "num_blocks": 16384, 00:03:34.690 "uuid": "ee9a5646-1c1b-5940-b2ec-30b5e221fbcf", 00:03:34.690 "assigned_rate_limits": { 00:03:34.690 "rw_ios_per_sec": 0, 00:03:34.690 "rw_mbytes_per_sec": 0, 00:03:34.690 "r_mbytes_per_sec": 0, 00:03:34.690 "w_mbytes_per_sec": 0 00:03:34.690 }, 00:03:34.690 "claimed": false, 00:03:34.690 "zoned": false, 00:03:34.690 "supported_io_types": { 00:03:34.690 "read": true, 00:03:34.690 "write": true, 00:03:34.690 "unmap": true, 00:03:34.690 "flush": true, 00:03:34.690 "reset": true, 00:03:34.690 "nvme_admin": false, 00:03:34.690 "nvme_io": false, 00:03:34.690 "nvme_io_md": false, 00:03:34.690 "write_zeroes": true, 00:03:34.690 "zcopy": true, 00:03:34.690 "get_zone_info": false, 00:03:34.690 "zone_management": false, 00:03:34.690 "zone_append": false, 00:03:34.690 "compare": false, 00:03:34.690 "compare_and_write": false, 00:03:34.690 "abort": true, 00:03:34.690 "seek_hole": false, 00:03:34.690 "seek_data": false, 00:03:34.690 "copy": true, 00:03:34.690 "nvme_iov_md": false 00:03:34.690 }, 00:03:34.690 "memory_domains": [ 00:03:34.690 { 00:03:34.690 "dma_device_id": "system", 00:03:34.690 "dma_device_type": 1 00:03:34.690 }, 00:03:34.690 { 00:03:34.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:34.690 "dma_device_type": 2 00:03:34.690 } 00:03:34.690 ], 00:03:34.690 "driver_specific": { 00:03:34.690 "passthru": { 00:03:34.690 "name": "Passthru0", 00:03:34.690 "base_bdev_name": "Malloc2" 00:03:34.690 } 00:03:34.690 } 00:03:34.690 } 00:03:34.690 ]' 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:34.690 00:03:34.690 real 0m0.263s 00:03:34.690 user 0m0.171s 00:03:34.690 sys 0m0.024s 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:34.690 22:13:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:34.690 ************************************ 00:03:34.690 END TEST rpc_daemon_integrity 00:03:34.690 ************************************ 00:03:34.690 22:13:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:34.690 22:13:00 rpc -- rpc/rpc.sh@84 -- # killprocess 3744610 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@948 -- # '[' -z 3744610 ']' 
00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@952 -- # kill -0 3744610 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@953 -- # uname 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744610 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744610' 00:03:34.690 killing process with pid 3744610 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@967 -- # kill 3744610 00:03:34.690 22:13:00 rpc -- common/autotest_common.sh@972 -- # wait 3744610 00:03:35.310 00:03:35.310 real 0m1.908s 00:03:35.310 user 0m2.498s 00:03:35.310 sys 0m0.587s 00:03:35.310 22:13:00 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.310 22:13:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.310 ************************************ 00:03:35.310 END TEST rpc 00:03:35.310 ************************************ 00:03:35.310 22:13:00 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.310 22:13:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.310 22:13:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.310 22:13:00 -- common/autotest_common.sh@10 -- # set +x 00:03:35.310 ************************************ 00:03:35.310 START TEST skip_rpc 00:03:35.310 ************************************ 00:03:35.310 22:13:00 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:35.310 * Looking for test storage... 
00:03:35.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:35.310 22:13:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:35.310 22:13:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.310 22:13:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:35.310 22:13:00 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.310 22:13:00 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.310 22:13:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.310 ************************************ 00:03:35.310 START TEST skip_rpc 00:03:35.310 ************************************ 00:03:35.310 22:13:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:35.310 22:13:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3744981 00:03:35.310 22:13:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:35.310 22:13:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.310 22:13:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:35.310 [2024-07-24 22:13:00.900308] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:03:35.310 [2024-07-24 22:13:00.900406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3744981 ] 00:03:35.310 EAL: No free 2048 kB hugepages reported on node 1 00:03:35.310 [2024-07-24 22:13:00.960091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.569 [2024-07-24 22:13:01.080183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es 
== 0 )) 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3744981 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3744981 ']' 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3744981 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3744981 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3744981' 00:03:40.829 killing process with pid 3744981 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3744981 00:03:40.829 22:13:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3744981 00:03:40.829 00:03:40.829 real 0m5.360s 00:03:40.829 user 0m5.065s 00:03:40.829 sys 0m0.285s 00:03:40.829 22:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.829 22:13:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.829 ************************************ 00:03:40.829 END TEST skip_rpc 00:03:40.829 ************************************ 00:03:40.829 22:13:06 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:40.829 22:13:06 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.829 22:13:06 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.829 22:13:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:40.829 
************************************ 00:03:40.829 START TEST skip_rpc_with_json 00:03:40.829 ************************************ 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3745513 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3745513 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3745513 ']' 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:40.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:40.829 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:40.829 [2024-07-24 22:13:06.315149] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:03:40.829 [2024-07-24 22:13:06.315251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3745513 ] 00:03:40.829 EAL: No free 2048 kB hugepages reported on node 1 00:03:40.829 [2024-07-24 22:13:06.374491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:40.829 [2024-07-24 22:13:06.491389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.088 [2024-07-24 22:13:06.725469] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:41.088 request: 00:03:41.088 { 00:03:41.088 "trtype": "tcp", 00:03:41.088 "method": "nvmf_get_transports", 00:03:41.088 "req_id": 1 00:03:41.088 } 00:03:41.088 Got JSON-RPC error response 00:03:41.088 response: 00:03:41.088 { 00:03:41.088 "code": -19, 00:03:41.088 "message": "No such device" 00:03:41.088 } 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.088 [2024-07-24 22:13:06.733592] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:41.088 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:41.346 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:41.346 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.346 { 00:03:41.346 "subsystems": [ 00:03:41.346 { 00:03:41.346 "subsystem": "vfio_user_target", 00:03:41.346 "config": null 00:03:41.346 }, 00:03:41.346 { 00:03:41.346 "subsystem": "keyring", 00:03:41.346 "config": [] 00:03:41.346 }, 00:03:41.346 { 00:03:41.346 "subsystem": "iobuf", 00:03:41.346 "config": [ 00:03:41.346 { 00:03:41.346 "method": "iobuf_set_options", 00:03:41.346 "params": { 00:03:41.346 "small_pool_count": 8192, 00:03:41.347 "large_pool_count": 1024, 00:03:41.347 "small_bufsize": 8192, 00:03:41.347 "large_bufsize": 135168 00:03:41.347 } 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "sock", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.347 "method": "sock_set_default_impl", 00:03:41.347 "params": { 00:03:41.347 "impl_name": "posix" 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "sock_impl_set_options", 00:03:41.347 "params": { 00:03:41.347 "impl_name": "ssl", 00:03:41.347 "recv_buf_size": 4096, 00:03:41.347 "send_buf_size": 4096, 00:03:41.347 "enable_recv_pipe": true, 00:03:41.347 "enable_quickack": false, 00:03:41.347 "enable_placement_id": 0, 00:03:41.347 "enable_zerocopy_send_server": true, 00:03:41.347 "enable_zerocopy_send_client": false, 00:03:41.347 "zerocopy_threshold": 0, 
00:03:41.347 "tls_version": 0, 00:03:41.347 "enable_ktls": false 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "sock_impl_set_options", 00:03:41.347 "params": { 00:03:41.347 "impl_name": "posix", 00:03:41.347 "recv_buf_size": 2097152, 00:03:41.347 "send_buf_size": 2097152, 00:03:41.347 "enable_recv_pipe": true, 00:03:41.347 "enable_quickack": false, 00:03:41.347 "enable_placement_id": 0, 00:03:41.347 "enable_zerocopy_send_server": true, 00:03:41.347 "enable_zerocopy_send_client": false, 00:03:41.347 "zerocopy_threshold": 0, 00:03:41.347 "tls_version": 0, 00:03:41.347 "enable_ktls": false 00:03:41.347 } 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "vmd", 00:03:41.347 "config": [] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "accel", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.347 "method": "accel_set_options", 00:03:41.347 "params": { 00:03:41.347 "small_cache_size": 128, 00:03:41.347 "large_cache_size": 16, 00:03:41.347 "task_count": 2048, 00:03:41.347 "sequence_count": 2048, 00:03:41.347 "buf_count": 2048 00:03:41.347 } 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "bdev", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.347 "method": "bdev_set_options", 00:03:41.347 "params": { 00:03:41.347 "bdev_io_pool_size": 65535, 00:03:41.347 "bdev_io_cache_size": 256, 00:03:41.347 "bdev_auto_examine": true, 00:03:41.347 "iobuf_small_cache_size": 128, 00:03:41.347 "iobuf_large_cache_size": 16 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "bdev_raid_set_options", 00:03:41.347 "params": { 00:03:41.347 "process_window_size_kb": 1024, 00:03:41.347 "process_max_bandwidth_mb_sec": 0 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "bdev_iscsi_set_options", 00:03:41.347 "params": { 00:03:41.347 "timeout_sec": 30 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "bdev_nvme_set_options", 00:03:41.347 
"params": { 00:03:41.347 "action_on_timeout": "none", 00:03:41.347 "timeout_us": 0, 00:03:41.347 "timeout_admin_us": 0, 00:03:41.347 "keep_alive_timeout_ms": 10000, 00:03:41.347 "arbitration_burst": 0, 00:03:41.347 "low_priority_weight": 0, 00:03:41.347 "medium_priority_weight": 0, 00:03:41.347 "high_priority_weight": 0, 00:03:41.347 "nvme_adminq_poll_period_us": 10000, 00:03:41.347 "nvme_ioq_poll_period_us": 0, 00:03:41.347 "io_queue_requests": 0, 00:03:41.347 "delay_cmd_submit": true, 00:03:41.347 "transport_retry_count": 4, 00:03:41.347 "bdev_retry_count": 3, 00:03:41.347 "transport_ack_timeout": 0, 00:03:41.347 "ctrlr_loss_timeout_sec": 0, 00:03:41.347 "reconnect_delay_sec": 0, 00:03:41.347 "fast_io_fail_timeout_sec": 0, 00:03:41.347 "disable_auto_failback": false, 00:03:41.347 "generate_uuids": false, 00:03:41.347 "transport_tos": 0, 00:03:41.347 "nvme_error_stat": false, 00:03:41.347 "rdma_srq_size": 0, 00:03:41.347 "io_path_stat": false, 00:03:41.347 "allow_accel_sequence": false, 00:03:41.347 "rdma_max_cq_size": 0, 00:03:41.347 "rdma_cm_event_timeout_ms": 0, 00:03:41.347 "dhchap_digests": [ 00:03:41.347 "sha256", 00:03:41.347 "sha384", 00:03:41.347 "sha512" 00:03:41.347 ], 00:03:41.347 "dhchap_dhgroups": [ 00:03:41.347 "null", 00:03:41.347 "ffdhe2048", 00:03:41.347 "ffdhe3072", 00:03:41.347 "ffdhe4096", 00:03:41.347 "ffdhe6144", 00:03:41.347 "ffdhe8192" 00:03:41.347 ] 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "bdev_nvme_set_hotplug", 00:03:41.347 "params": { 00:03:41.347 "period_us": 100000, 00:03:41.347 "enable": false 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "bdev_wait_for_examine" 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "scsi", 00:03:41.347 "config": null 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "scheduler", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.347 "method": "framework_set_scheduler", 00:03:41.347 "params": { 00:03:41.347 
"name": "static" 00:03:41.347 } 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "vhost_scsi", 00:03:41.347 "config": [] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "vhost_blk", 00:03:41.347 "config": [] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "ublk", 00:03:41.347 "config": [] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "nbd", 00:03:41.347 "config": [] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "nvmf", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.347 "method": "nvmf_set_config", 00:03:41.347 "params": { 00:03:41.347 "discovery_filter": "match_any", 00:03:41.347 "admin_cmd_passthru": { 00:03:41.347 "identify_ctrlr": false 00:03:41.347 } 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "nvmf_set_max_subsystems", 00:03:41.347 "params": { 00:03:41.347 "max_subsystems": 1024 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "nvmf_set_crdt", 00:03:41.347 "params": { 00:03:41.347 "crdt1": 0, 00:03:41.347 "crdt2": 0, 00:03:41.347 "crdt3": 0 00:03:41.347 } 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "method": "nvmf_create_transport", 00:03:41.347 "params": { 00:03:41.347 "trtype": "TCP", 00:03:41.347 "max_queue_depth": 128, 00:03:41.347 "max_io_qpairs_per_ctrlr": 127, 00:03:41.347 "in_capsule_data_size": 4096, 00:03:41.347 "max_io_size": 131072, 00:03:41.347 "io_unit_size": 131072, 00:03:41.347 "max_aq_depth": 128, 00:03:41.347 "num_shared_buffers": 511, 00:03:41.347 "buf_cache_size": 4294967295, 00:03:41.347 "dif_insert_or_strip": false, 00:03:41.347 "zcopy": false, 00:03:41.347 "c2h_success": true, 00:03:41.347 "sock_priority": 0, 00:03:41.347 "abort_timeout_sec": 1, 00:03:41.347 "ack_timeout": 0, 00:03:41.347 "data_wr_pool_size": 0 00:03:41.347 } 00:03:41.347 } 00:03:41.347 ] 00:03:41.347 }, 00:03:41.347 { 00:03:41.347 "subsystem": "iscsi", 00:03:41.347 "config": [ 00:03:41.347 { 00:03:41.348 "method": "iscsi_set_options", 00:03:41.348 
"params": { 00:03:41.348 "node_base": "iqn.2016-06.io.spdk", 00:03:41.348 "max_sessions": 128, 00:03:41.348 "max_connections_per_session": 2, 00:03:41.348 "max_queue_depth": 64, 00:03:41.348 "default_time2wait": 2, 00:03:41.348 "default_time2retain": 20, 00:03:41.348 "first_burst_length": 8192, 00:03:41.348 "immediate_data": true, 00:03:41.348 "allow_duplicated_isid": false, 00:03:41.348 "error_recovery_level": 0, 00:03:41.348 "nop_timeout": 60, 00:03:41.348 "nop_in_interval": 30, 00:03:41.348 "disable_chap": false, 00:03:41.348 "require_chap": false, 00:03:41.348 "mutual_chap": false, 00:03:41.348 "chap_group": 0, 00:03:41.348 "max_large_datain_per_connection": 64, 00:03:41.348 "max_r2t_per_connection": 4, 00:03:41.348 "pdu_pool_size": 36864, 00:03:41.348 "immediate_data_pool_size": 16384, 00:03:41.348 "data_out_pool_size": 2048 00:03:41.348 } 00:03:41.348 } 00:03:41.348 ] 00:03:41.348 } 00:03:41.348 ] 00:03:41.348 } 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3745513 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3745513 ']' 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3745513 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745513 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 3745513' 00:03:41.348 killing process with pid 3745513 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3745513 00:03:41.348 22:13:06 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3745513 00:03:41.607 22:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3745617 00:03:41.607 22:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:41.607 22:13:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3745617 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3745617 ']' 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3745617 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745617 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745617' 00:03:46.867 killing process with pid 3745617 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3745617 00:03:46.867 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3745617 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:47.126 00:03:47.126 real 0m6.358s 00:03:47.126 user 0m6.036s 00:03:47.126 sys 0m0.637s 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:47.126 ************************************ 00:03:47.126 END TEST skip_rpc_with_json 00:03:47.126 ************************************ 00:03:47.126 22:13:12 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.126 ************************************ 00:03:47.126 START TEST skip_rpc_with_delay 00:03:47.126 ************************************ 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.126 
22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:47.126 [2024-07-24 22:13:12.734019] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:47.126 [2024-07-24 22:13:12.734168] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:47.126 00:03:47.126 real 0m0.079s 00:03:47.126 user 0m0.049s 00:03:47.126 sys 0m0.029s 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:47.126 22:13:12 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:47.126 ************************************ 00:03:47.126 END TEST skip_rpc_with_delay 00:03:47.126 ************************************ 00:03:47.126 22:13:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:47.126 22:13:12 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:47.126 22:13:12 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:47.126 22:13:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.126 ************************************ 00:03:47.126 START TEST exit_on_failed_rpc_init 00:03:47.126 ************************************ 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3746173 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3746173 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3746173 ']' 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.126 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:47.127 22:13:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:47.385 [2024-07-24 22:13:12.865907] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:03:47.385 [2024-07-24 22:13:12.866001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746173 ] 00:03:47.385 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.385 [2024-07-24 22:13:12.926650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.385 [2024-07-24 22:13:13.046632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.644 22:13:13 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:47.644 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:47.644 [2024-07-24 22:13:13.339834] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:03:47.644 [2024-07-24 22:13:13.339940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746182 ] 00:03:47.901 EAL: No free 2048 kB hugepages reported on node 1 00:03:47.901 [2024-07-24 22:13:13.399593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.901 [2024-07-24 22:13:13.518441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:03:47.901 [2024-07-24 22:13:13.518569] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:47.901 [2024-07-24 22:13:13.518591] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:47.901 [2024-07-24 22:13:13.518605] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3746173 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3746173 ']' 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3746173 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3746173 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3746173' 
00:03:48.159 killing process with pid 3746173 00:03:48.159 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3746173 00:03:48.160 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3746173 00:03:48.419 00:03:48.419 real 0m1.186s 00:03:48.419 user 0m1.440s 00:03:48.419 sys 0m0.405s 00:03:48.419 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.419 22:13:13 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:48.419 ************************************ 00:03:48.419 END TEST exit_on_failed_rpc_init 00:03:48.419 ************************************ 00:03:48.419 22:13:14 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:48.419 00:03:48.419 real 0m13.265s 00:03:48.419 user 0m12.690s 00:03:48.419 sys 0m1.550s 00:03:48.419 22:13:14 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.419 22:13:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.419 ************************************ 00:03:48.419 END TEST skip_rpc 00:03:48.419 ************************************ 00:03:48.419 22:13:14 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.419 22:13:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.419 22:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.419 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:03:48.419 ************************************ 00:03:48.419 START TEST rpc_client 00:03:48.419 ************************************ 00:03:48.419 22:13:14 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:48.678 * Looking for test storage... 
00:03:48.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:48.678 22:13:14 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:48.678 OK 00:03:48.678 22:13:14 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:48.678 00:03:48.678 real 0m0.073s 00:03:48.678 user 0m0.031s 00:03:48.678 sys 0m0.047s 00:03:48.678 22:13:14 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.678 22:13:14 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:48.678 ************************************ 00:03:48.678 END TEST rpc_client 00:03:48.678 ************************************ 00:03:48.678 22:13:14 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:48.678 22:13:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.678 22:13:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.678 22:13:14 -- common/autotest_common.sh@10 -- # set +x 00:03:48.678 ************************************ 00:03:48.678 START TEST json_config 00:03:48.678 ************************************ 00:03:48.678 22:13:14 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:48.678 22:13:14 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.678 22:13:14 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.678 22:13:14 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:48.678 22:13:14 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.678 22:13:14 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.678 22:13:14 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.678 22:13:14 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:03:48.679 22:13:14 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.679 22:13:14 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.679 22:13:14 json_config -- paths/export.sh@5 -- # export PATH 00:03:48.679 22:13:14 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@47 -- # : 0 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.679 22:13:14 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:48.679 22:13:14 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:48.679 22:13:14 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:03:48.679 INFO: JSON configuration test init 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.679 22:13:14 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:03:48.679 22:13:14 json_config -- json_config/common.sh@9 -- # local app=target 00:03:48.679 22:13:14 json_config -- json_config/common.sh@10 -- # shift 00:03:48.679 22:13:14 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:48.679 22:13:14 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:48.679 22:13:14 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:48.679 22:13:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.679 22:13:14 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.679 22:13:14 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3746393 00:03:48.679 22:13:14 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:48.679 22:13:14 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:03:48.679 Waiting for target to run... 00:03:48.679 22:13:14 json_config -- json_config/common.sh@25 -- # waitforlisten 3746393 /var/tmp/spdk_tgt.sock 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@829 -- # '[' -z 3746393 ']' 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:48.679 22:13:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.679 [2024-07-24 22:13:14.324835] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:03:48.679 [2024-07-24 22:13:14.324941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3746393 ] 00:03:48.679 EAL: No free 2048 kB hugepages reported on node 1 00:03:48.938 [2024-07-24 22:13:14.635215] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.195 [2024-07-24 22:13:14.732184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@862 -- # return 0 00:03:49.761 22:13:15 json_config -- json_config/common.sh@26 -- # echo '' 00:03:49.761 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:49.761 22:13:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:49.761 22:13:15 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:03:49.761 22:13:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:53.048 
22:13:18 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:53.048 22:13:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.048 22:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:53.048 22:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:53.048 22:13:18 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@51 -- # sort 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:03:53.306 22:13:18 
json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.306 22:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@59 -- # return 0 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:03:53.306 22:13:18 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:03:53.307 22:13:18 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.307 22:13:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:53.307 22:13:18 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:53.307 22:13:18 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:03:53.307 22:13:18 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:03:53.307 22:13:18 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.307 22:13:18 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:53.565 MallocForNvmf0 00:03:53.565 22:13:19 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.565 22:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:53.823 MallocForNvmf1 00:03:53.823 22:13:19 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:53.823 22:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:54.081 [2024-07-24 22:13:19.773074] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:54.340 22:13:19 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.340 22:13:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:54.598 22:13:20 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.598 22:13:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:54.856 22:13:20 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:54.856 22:13:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:55.114 22:13:20 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.114 22:13:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:55.372 [2024-07-24 22:13:20.952833] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:55.372 22:13:20 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:03:55.372 22:13:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.372 22:13:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.372 22:13:20 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:03:55.372 22:13:20 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.372 22:13:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.372 22:13:21 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:03:55.372 22:13:21 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.372 22:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:55.630 MallocBdevForConfigChangeCheck 00:03:55.630 22:13:21 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:03:55.630 22:13:21 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:55.630 22:13:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:55.888 22:13:21 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:03:55.888 22:13:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:56.146 22:13:21 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:03:56.146 INFO: shutting down applications... 
00:03:56.146 22:13:21 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:03:56.146 22:13:21 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:03:56.146 22:13:21 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:03:56.146 22:13:21 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:58.045 Calling clear_iscsi_subsystem 00:03:58.045 Calling clear_nvmf_subsystem 00:03:58.045 Calling clear_nbd_subsystem 00:03:58.045 Calling clear_ublk_subsystem 00:03:58.045 Calling clear_vhost_blk_subsystem 00:03:58.045 Calling clear_vhost_scsi_subsystem 00:03:58.045 Calling clear_bdev_subsystem 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@347 -- # count=100 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:58.045 22:13:23 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:58.302 22:13:23 json_config -- json_config/json_config.sh@349 -- # break 00:03:58.302 22:13:23 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:03:58.302 22:13:23 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:03:58.302 22:13:23 json_config -- 
json_config/common.sh@31 -- # local app=target 00:03:58.302 22:13:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:58.302 22:13:23 json_config -- json_config/common.sh@35 -- # [[ -n 3746393 ]] 00:03:58.302 22:13:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3746393 00:03:58.302 22:13:23 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:58.302 22:13:23 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.302 22:13:23 json_config -- json_config/common.sh@41 -- # kill -0 3746393 00:03:58.302 22:13:23 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:58.871 22:13:24 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:58.871 22:13:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:58.871 22:13:24 json_config -- json_config/common.sh@41 -- # kill -0 3746393 00:03:58.871 22:13:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:58.871 22:13:24 json_config -- json_config/common.sh@43 -- # break 00:03:58.871 22:13:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:58.871 22:13:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:58.871 SPDK target shutdown done 00:03:58.871 22:13:24 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:03:58.871 INFO: relaunching applications... 
00:03:58.871 22:13:24 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.871 22:13:24 json_config -- json_config/common.sh@9 -- # local app=target 00:03:58.871 22:13:24 json_config -- json_config/common.sh@10 -- # shift 00:03:58.871 22:13:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:58.871 22:13:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:58.871 22:13:24 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:58.871 22:13:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.871 22:13:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:58.871 22:13:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3747428 00:03:58.871 22:13:24 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:58.871 22:13:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:58.871 Waiting for target to run... 00:03:58.871 22:13:24 json_config -- json_config/common.sh@25 -- # waitforlisten 3747428 /var/tmp/spdk_tgt.sock 00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@829 -- # '[' -z 3747428 ']' 00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:58.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:58.871 22:13:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:58.871 [2024-07-24 22:13:24.410906] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:03:58.871 [2024-07-24 22:13:24.410990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3747428 ] 00:03:58.871 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.129 [2024-07-24 22:13:24.764569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.389 [2024-07-24 22:13:24.860729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.674 [2024-07-24 22:13:27.881710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:02.674 [2024-07-24 22:13:27.914090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:02.674 22:13:27 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:02.674 22:13:27 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:02.674 22:13:27 json_config -- json_config/common.sh@26 -- # echo '' 00:04:02.674 00:04:02.674 22:13:27 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:02.674 22:13:27 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:02.674 INFO: Checking if target configuration is the same... 
00:04:02.674 22:13:27 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.674 22:13:27 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:02.674 22:13:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:02.674 + '[' 2 -ne 2 ']' 00:04:02.674 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:02.675 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:02.675 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:02.675 +++ basename /dev/fd/62 00:04:02.675 ++ mktemp /tmp/62.XXX 00:04:02.675 + tmp_file_1=/tmp/62.t3r 00:04:02.675 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.675 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:02.675 + tmp_file_2=/tmp/spdk_tgt_config.json.tGL 00:04:02.675 + ret=0 00:04:02.675 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.675 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:02.932 + diff -u /tmp/62.t3r /tmp/spdk_tgt_config.json.tGL 00:04:02.932 + echo 'INFO: JSON config files are the same' 00:04:02.932 INFO: JSON config files are the same 00:04:02.932 + rm /tmp/62.t3r /tmp/spdk_tgt_config.json.tGL 00:04:02.932 + exit 0 00:04:02.932 22:13:28 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:02.932 22:13:28 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:02.932 INFO: changing configuration and checking if this can be detected... 
00:04:02.932 22:13:28 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:02.932 22:13:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:03.191 22:13:28 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.191 22:13:28 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:03.191 22:13:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:03.191 + '[' 2 -ne 2 ']' 00:04:03.191 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:03.191 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:03.191 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:03.191 +++ basename /dev/fd/62 00:04:03.191 ++ mktemp /tmp/62.XXX 00:04:03.191 + tmp_file_1=/tmp/62.9U6 00:04:03.191 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:03.191 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:03.191 + tmp_file_2=/tmp/spdk_tgt_config.json.p4L 00:04:03.191 + ret=0 00:04:03.191 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.757 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:03.757 + diff -u /tmp/62.9U6 /tmp/spdk_tgt_config.json.p4L 00:04:03.757 + ret=1 00:04:03.757 + echo '=== Start of file: /tmp/62.9U6 ===' 00:04:03.757 + cat /tmp/62.9U6 00:04:03.757 + echo '=== End of file: /tmp/62.9U6 ===' 00:04:03.757 + echo '' 00:04:03.757 + echo '=== Start of file: /tmp/spdk_tgt_config.json.p4L ===' 00:04:03.757 + cat /tmp/spdk_tgt_config.json.p4L 00:04:03.757 + echo '=== End of file: /tmp/spdk_tgt_config.json.p4L ===' 00:04:03.757 + echo '' 00:04:03.757 + rm /tmp/62.9U6 /tmp/spdk_tgt_config.json.p4L 00:04:03.757 + exit 1 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:03.757 INFO: configuration change detected. 
00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@321 -- # [[ -n 3747428 ]] 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.757 22:13:29 json_config -- json_config/json_config.sh@327 -- # killprocess 3747428 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@948 -- # '[' -z 3747428 ']' 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@952 -- # kill -0 
3747428 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@953 -- # uname 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3747428 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3747428' 00:04:03.757 killing process with pid 3747428 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@967 -- # kill 3747428 00:04:03.757 22:13:29 json_config -- common/autotest_common.sh@972 -- # wait 3747428 00:04:05.659 22:13:30 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:05.659 22:13:30 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:05.659 22:13:30 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:05.659 22:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.659 22:13:30 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:05.659 22:13:30 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:05.659 INFO: Success 00:04:05.659 00:04:05.659 real 0m16.696s 00:04:05.659 user 0m19.576s 00:04:05.659 sys 0m1.907s 00:04:05.659 22:13:30 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.659 22:13:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:05.659 ************************************ 00:04:05.659 END TEST json_config 00:04:05.659 ************************************ 00:04:05.659 22:13:30 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.659 22:13:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.659 22:13:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.659 22:13:30 -- common/autotest_common.sh@10 -- # set +x 00:04:05.659 ************************************ 00:04:05.659 START TEST json_config_extra_key 00:04:05.659 ************************************ 00:04:05.659 22:13:30 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:05.659 22:13:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:04:05.659 22:13:30 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:05.659 22:13:30 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.659 22:13:31 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.659 22:13:31 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.659 22:13:31 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.659 22:13:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.659 22:13:31 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.659 22:13:31 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.659 22:13:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:05.659 22:13:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:05.659 22:13:31 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:05.659 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:05.659 22:13:31 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:05.660 INFO: launching applications... 
00:04:05.660 22:13:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3748144 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:05.660 Waiting for target to run... 
00:04:05.660 22:13:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3748144 /var/tmp/spdk_tgt.sock 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3748144 ']' 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:05.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:05.660 22:13:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:05.660 [2024-07-24 22:13:31.063735] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:05.660 [2024-07-24 22:13:31.063838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748144 ] 00:04:05.660 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.918 [2024-07-24 22:13:31.380195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.918 [2024-07-24 22:13:31.475438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.487 22:13:32 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:06.487 22:13:32 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:06.487 00:04:06.487 22:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:06.487 INFO: shutting down applications... 
00:04:06.487 22:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3748144 ]] 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3748144 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3748144 00:04:06.487 22:13:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3748144 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:07.084 22:13:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:07.084 SPDK target shutdown done 00:04:07.084 22:13:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:07.084 Success 00:04:07.084 00:04:07.084 real 0m1.677s 00:04:07.084 user 0m1.660s 00:04:07.084 sys 0m0.424s 00:04:07.084 22:13:32 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.084 22:13:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:07.084 
************************************ 00:04:07.084 END TEST json_config_extra_key 00:04:07.084 ************************************ 00:04:07.085 22:13:32 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.085 22:13:32 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.085 22:13:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.085 22:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 ************************************ 00:04:07.085 START TEST alias_rpc 00:04:07.085 ************************************ 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:07.085 * Looking for test storage... 00:04:07.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:07.085 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:07.085 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3748396 00:04:07.085 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:07.085 22:13:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3748396 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3748396 ']' 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:07.085 22:13:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.085 [2024-07-24 22:13:32.784433] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:07.085 [2024-07-24 22:13:32.784541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748396 ] 00:04:07.343 EAL: No free 2048 kB hugepages reported on node 1 00:04:07.343 [2024-07-24 22:13:32.843895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.343 [2024-07-24 22:13:32.960500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.601 22:13:33 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:07.601 22:13:33 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:07.601 22:13:33 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:07.866 22:13:33 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3748396 00:04:07.866 22:13:33 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3748396 ']' 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3748396 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748396 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748396' 
00:04:07.867 killing process with pid 3748396 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@967 -- # kill 3748396 00:04:07.867 22:13:33 alias_rpc -- common/autotest_common.sh@972 -- # wait 3748396 00:04:08.437 00:04:08.437 real 0m1.189s 00:04:08.437 user 0m1.384s 00:04:08.437 sys 0m0.388s 00:04:08.437 22:13:33 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.437 22:13:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.437 ************************************ 00:04:08.437 END TEST alias_rpc 00:04:08.437 ************************************ 00:04:08.437 22:13:33 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:08.437 22:13:33 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.437 22:13:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.437 22:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.437 22:13:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.437 ************************************ 00:04:08.437 START TEST spdkcli_tcp 00:04:08.437 ************************************ 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:08.437 * Looking for test storage... 
00:04:08.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3748550 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:08.437 22:13:33 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3748550 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3748550 ']' 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:08.437 22:13:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:08.437 [2024-07-24 22:13:34.037842] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:08.437 [2024-07-24 22:13:34.037946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748550 ] 00:04:08.437 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.437 [2024-07-24 22:13:34.097303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:08.695 [2024-07-24 22:13:34.215093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:08.695 [2024-07-24 22:13:34.215167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.953 22:13:34 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:08.953 22:13:34 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:08.953 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3748554 00:04:08.953 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:08.953 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:09.211 [ 00:04:09.211 "bdev_malloc_delete", 00:04:09.211 "bdev_malloc_create", 00:04:09.211 "bdev_null_resize", 00:04:09.211 "bdev_null_delete", 00:04:09.211 "bdev_null_create", 00:04:09.211 "bdev_nvme_cuse_unregister", 00:04:09.211 "bdev_nvme_cuse_register", 00:04:09.211 "bdev_opal_new_user", 00:04:09.211 "bdev_opal_set_lock_state", 00:04:09.211 "bdev_opal_delete", 00:04:09.211 "bdev_opal_get_info", 00:04:09.211 "bdev_opal_create", 00:04:09.211 "bdev_nvme_opal_revert", 00:04:09.211 
"bdev_nvme_opal_init", 00:04:09.211 "bdev_nvme_send_cmd", 00:04:09.211 "bdev_nvme_get_path_iostat", 00:04:09.211 "bdev_nvme_get_mdns_discovery_info", 00:04:09.211 "bdev_nvme_stop_mdns_discovery", 00:04:09.211 "bdev_nvme_start_mdns_discovery", 00:04:09.211 "bdev_nvme_set_multipath_policy", 00:04:09.211 "bdev_nvme_set_preferred_path", 00:04:09.211 "bdev_nvme_get_io_paths", 00:04:09.211 "bdev_nvme_remove_error_injection", 00:04:09.211 "bdev_nvme_add_error_injection", 00:04:09.211 "bdev_nvme_get_discovery_info", 00:04:09.211 "bdev_nvme_stop_discovery", 00:04:09.211 "bdev_nvme_start_discovery", 00:04:09.211 "bdev_nvme_get_controller_health_info", 00:04:09.211 "bdev_nvme_disable_controller", 00:04:09.211 "bdev_nvme_enable_controller", 00:04:09.211 "bdev_nvme_reset_controller", 00:04:09.211 "bdev_nvme_get_transport_statistics", 00:04:09.211 "bdev_nvme_apply_firmware", 00:04:09.211 "bdev_nvme_detach_controller", 00:04:09.211 "bdev_nvme_get_controllers", 00:04:09.211 "bdev_nvme_attach_controller", 00:04:09.211 "bdev_nvme_set_hotplug", 00:04:09.211 "bdev_nvme_set_options", 00:04:09.211 "bdev_passthru_delete", 00:04:09.211 "bdev_passthru_create", 00:04:09.211 "bdev_lvol_set_parent_bdev", 00:04:09.211 "bdev_lvol_set_parent", 00:04:09.211 "bdev_lvol_check_shallow_copy", 00:04:09.211 "bdev_lvol_start_shallow_copy", 00:04:09.211 "bdev_lvol_grow_lvstore", 00:04:09.211 "bdev_lvol_get_lvols", 00:04:09.211 "bdev_lvol_get_lvstores", 00:04:09.211 "bdev_lvol_delete", 00:04:09.211 "bdev_lvol_set_read_only", 00:04:09.211 "bdev_lvol_resize", 00:04:09.211 "bdev_lvol_decouple_parent", 00:04:09.211 "bdev_lvol_inflate", 00:04:09.211 "bdev_lvol_rename", 00:04:09.211 "bdev_lvol_clone_bdev", 00:04:09.211 "bdev_lvol_clone", 00:04:09.211 "bdev_lvol_snapshot", 00:04:09.211 "bdev_lvol_create", 00:04:09.211 "bdev_lvol_delete_lvstore", 00:04:09.211 "bdev_lvol_rename_lvstore", 00:04:09.211 "bdev_lvol_create_lvstore", 00:04:09.211 "bdev_raid_set_options", 00:04:09.211 "bdev_raid_remove_base_bdev", 
00:04:09.211 "bdev_raid_add_base_bdev", 00:04:09.211 "bdev_raid_delete", 00:04:09.211 "bdev_raid_create", 00:04:09.211 "bdev_raid_get_bdevs", 00:04:09.211 "bdev_error_inject_error", 00:04:09.211 "bdev_error_delete", 00:04:09.211 "bdev_error_create", 00:04:09.211 "bdev_split_delete", 00:04:09.211 "bdev_split_create", 00:04:09.211 "bdev_delay_delete", 00:04:09.211 "bdev_delay_create", 00:04:09.211 "bdev_delay_update_latency", 00:04:09.211 "bdev_zone_block_delete", 00:04:09.211 "bdev_zone_block_create", 00:04:09.211 "blobfs_create", 00:04:09.211 "blobfs_detect", 00:04:09.211 "blobfs_set_cache_size", 00:04:09.211 "bdev_aio_delete", 00:04:09.211 "bdev_aio_rescan", 00:04:09.211 "bdev_aio_create", 00:04:09.211 "bdev_ftl_set_property", 00:04:09.211 "bdev_ftl_get_properties", 00:04:09.211 "bdev_ftl_get_stats", 00:04:09.211 "bdev_ftl_unmap", 00:04:09.211 "bdev_ftl_unload", 00:04:09.211 "bdev_ftl_delete", 00:04:09.211 "bdev_ftl_load", 00:04:09.211 "bdev_ftl_create", 00:04:09.211 "bdev_virtio_attach_controller", 00:04:09.211 "bdev_virtio_scsi_get_devices", 00:04:09.211 "bdev_virtio_detach_controller", 00:04:09.211 "bdev_virtio_blk_set_hotplug", 00:04:09.211 "bdev_iscsi_delete", 00:04:09.211 "bdev_iscsi_create", 00:04:09.211 "bdev_iscsi_set_options", 00:04:09.211 "accel_error_inject_error", 00:04:09.211 "ioat_scan_accel_module", 00:04:09.211 "dsa_scan_accel_module", 00:04:09.211 "iaa_scan_accel_module", 00:04:09.211 "vfu_virtio_create_scsi_endpoint", 00:04:09.211 "vfu_virtio_scsi_remove_target", 00:04:09.211 "vfu_virtio_scsi_add_target", 00:04:09.211 "vfu_virtio_create_blk_endpoint", 00:04:09.211 "vfu_virtio_delete_endpoint", 00:04:09.211 "keyring_file_remove_key", 00:04:09.211 "keyring_file_add_key", 00:04:09.211 "keyring_linux_set_options", 00:04:09.211 "iscsi_get_histogram", 00:04:09.211 "iscsi_enable_histogram", 00:04:09.212 "iscsi_set_options", 00:04:09.212 "iscsi_get_auth_groups", 00:04:09.212 "iscsi_auth_group_remove_secret", 00:04:09.212 "iscsi_auth_group_add_secret", 
00:04:09.212 "iscsi_delete_auth_group", 00:04:09.212 "iscsi_create_auth_group", 00:04:09.212 "iscsi_set_discovery_auth", 00:04:09.212 "iscsi_get_options", 00:04:09.212 "iscsi_target_node_request_logout", 00:04:09.212 "iscsi_target_node_set_redirect", 00:04:09.212 "iscsi_target_node_set_auth", 00:04:09.212 "iscsi_target_node_add_lun", 00:04:09.212 "iscsi_get_stats", 00:04:09.212 "iscsi_get_connections", 00:04:09.212 "iscsi_portal_group_set_auth", 00:04:09.212 "iscsi_start_portal_group", 00:04:09.212 "iscsi_delete_portal_group", 00:04:09.212 "iscsi_create_portal_group", 00:04:09.212 "iscsi_get_portal_groups", 00:04:09.212 "iscsi_delete_target_node", 00:04:09.212 "iscsi_target_node_remove_pg_ig_maps", 00:04:09.212 "iscsi_target_node_add_pg_ig_maps", 00:04:09.212 "iscsi_create_target_node", 00:04:09.212 "iscsi_get_target_nodes", 00:04:09.212 "iscsi_delete_initiator_group", 00:04:09.212 "iscsi_initiator_group_remove_initiators", 00:04:09.212 "iscsi_initiator_group_add_initiators", 00:04:09.212 "iscsi_create_initiator_group", 00:04:09.212 "iscsi_get_initiator_groups", 00:04:09.212 "nvmf_set_crdt", 00:04:09.212 "nvmf_set_config", 00:04:09.212 "nvmf_set_max_subsystems", 00:04:09.212 "nvmf_stop_mdns_prr", 00:04:09.212 "nvmf_publish_mdns_prr", 00:04:09.212 "nvmf_subsystem_get_listeners", 00:04:09.212 "nvmf_subsystem_get_qpairs", 00:04:09.212 "nvmf_subsystem_get_controllers", 00:04:09.212 "nvmf_get_stats", 00:04:09.212 "nvmf_get_transports", 00:04:09.212 "nvmf_create_transport", 00:04:09.212 "nvmf_get_targets", 00:04:09.212 "nvmf_delete_target", 00:04:09.212 "nvmf_create_target", 00:04:09.212 "nvmf_subsystem_allow_any_host", 00:04:09.212 "nvmf_subsystem_remove_host", 00:04:09.212 "nvmf_subsystem_add_host", 00:04:09.212 "nvmf_ns_remove_host", 00:04:09.212 "nvmf_ns_add_host", 00:04:09.212 "nvmf_subsystem_remove_ns", 00:04:09.212 "nvmf_subsystem_add_ns", 00:04:09.212 "nvmf_subsystem_listener_set_ana_state", 00:04:09.212 "nvmf_discovery_get_referrals", 00:04:09.212 
"nvmf_discovery_remove_referral", 00:04:09.212 "nvmf_discovery_add_referral", 00:04:09.212 "nvmf_subsystem_remove_listener", 00:04:09.212 "nvmf_subsystem_add_listener", 00:04:09.212 "nvmf_delete_subsystem", 00:04:09.212 "nvmf_create_subsystem", 00:04:09.212 "nvmf_get_subsystems", 00:04:09.212 "env_dpdk_get_mem_stats", 00:04:09.212 "nbd_get_disks", 00:04:09.212 "nbd_stop_disk", 00:04:09.212 "nbd_start_disk", 00:04:09.212 "ublk_recover_disk", 00:04:09.212 "ublk_get_disks", 00:04:09.212 "ublk_stop_disk", 00:04:09.212 "ublk_start_disk", 00:04:09.212 "ublk_destroy_target", 00:04:09.212 "ublk_create_target", 00:04:09.212 "virtio_blk_create_transport", 00:04:09.212 "virtio_blk_get_transports", 00:04:09.212 "vhost_controller_set_coalescing", 00:04:09.212 "vhost_get_controllers", 00:04:09.212 "vhost_delete_controller", 00:04:09.212 "vhost_create_blk_controller", 00:04:09.212 "vhost_scsi_controller_remove_target", 00:04:09.212 "vhost_scsi_controller_add_target", 00:04:09.212 "vhost_start_scsi_controller", 00:04:09.212 "vhost_create_scsi_controller", 00:04:09.212 "thread_set_cpumask", 00:04:09.212 "framework_get_governor", 00:04:09.212 "framework_get_scheduler", 00:04:09.212 "framework_set_scheduler", 00:04:09.212 "framework_get_reactors", 00:04:09.212 "thread_get_io_channels", 00:04:09.212 "thread_get_pollers", 00:04:09.212 "thread_get_stats", 00:04:09.212 "framework_monitor_context_switch", 00:04:09.212 "spdk_kill_instance", 00:04:09.212 "log_enable_timestamps", 00:04:09.212 "log_get_flags", 00:04:09.212 "log_clear_flag", 00:04:09.212 "log_set_flag", 00:04:09.212 "log_get_level", 00:04:09.212 "log_set_level", 00:04:09.212 "log_get_print_level", 00:04:09.212 "log_set_print_level", 00:04:09.212 "framework_enable_cpumask_locks", 00:04:09.212 "framework_disable_cpumask_locks", 00:04:09.212 "framework_wait_init", 00:04:09.212 "framework_start_init", 00:04:09.212 "scsi_get_devices", 00:04:09.212 "bdev_get_histogram", 00:04:09.212 "bdev_enable_histogram", 00:04:09.212 
"bdev_set_qos_limit", 00:04:09.212 "bdev_set_qd_sampling_period", 00:04:09.212 "bdev_get_bdevs", 00:04:09.212 "bdev_reset_iostat", 00:04:09.212 "bdev_get_iostat", 00:04:09.212 "bdev_examine", 00:04:09.212 "bdev_wait_for_examine", 00:04:09.212 "bdev_set_options", 00:04:09.212 "notify_get_notifications", 00:04:09.212 "notify_get_types", 00:04:09.212 "accel_get_stats", 00:04:09.212 "accel_set_options", 00:04:09.212 "accel_set_driver", 00:04:09.212 "accel_crypto_key_destroy", 00:04:09.212 "accel_crypto_keys_get", 00:04:09.212 "accel_crypto_key_create", 00:04:09.212 "accel_assign_opc", 00:04:09.212 "accel_get_module_info", 00:04:09.212 "accel_get_opc_assignments", 00:04:09.212 "vmd_rescan", 00:04:09.212 "vmd_remove_device", 00:04:09.212 "vmd_enable", 00:04:09.212 "sock_get_default_impl", 00:04:09.212 "sock_set_default_impl", 00:04:09.212 "sock_impl_set_options", 00:04:09.212 "sock_impl_get_options", 00:04:09.212 "iobuf_get_stats", 00:04:09.212 "iobuf_set_options", 00:04:09.212 "keyring_get_keys", 00:04:09.212 "framework_get_pci_devices", 00:04:09.212 "framework_get_config", 00:04:09.212 "framework_get_subsystems", 00:04:09.212 "vfu_tgt_set_base_path", 00:04:09.212 "trace_get_info", 00:04:09.212 "trace_get_tpoint_group_mask", 00:04:09.212 "trace_disable_tpoint_group", 00:04:09.212 "trace_enable_tpoint_group", 00:04:09.212 "trace_clear_tpoint_mask", 00:04:09.212 "trace_set_tpoint_mask", 00:04:09.212 "spdk_get_version", 00:04:09.212 "rpc_get_methods" 00:04:09.212 ] 00:04:09.212 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.212 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:09.212 22:13:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3748550 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3748550 ']' 
00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3748550 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748550 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748550' 00:04:09.212 killing process with pid 3748550 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3748550 00:04:09.212 22:13:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3748550 00:04:09.472 00:04:09.472 real 0m1.202s 00:04:09.472 user 0m2.180s 00:04:09.472 sys 0m0.424s 00:04:09.472 22:13:35 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.472 22:13:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:09.472 ************************************ 00:04:09.472 END TEST spdkcli_tcp 00:04:09.472 ************************************ 00:04:09.472 22:13:35 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:09.472 22:13:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.472 22:13:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.472 22:13:35 -- common/autotest_common.sh@10 -- # set +x 00:04:09.472 ************************************ 00:04:09.472 START TEST dpdk_mem_utility 00:04:09.472 ************************************ 00:04:09.472 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:09.731 
* Looking for test storage... 00:04:09.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:09.731 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:09.731 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3748720 00:04:09.731 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3748720 00:04:09.731 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3748720 ']' 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:09.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:09.731 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:09.731 [2024-07-24 22:13:35.285060] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:09.731 [2024-07-24 22:13:35.285161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748720 ] 00:04:09.731 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.731 [2024-07-24 22:13:35.344807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.989 [2024-07-24 22:13:35.461675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:09.989 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:09.989 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:09.989 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:09.989 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:09.989 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:09.989 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.248 { 00:04:10.248 "filename": "/tmp/spdk_mem_dump.txt" 00:04:10.248 } 00:04:10.248 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:10.248 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:10.248 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:10.248 1 heaps totaling size 814.000000 MiB 00:04:10.248 size: 814.000000 MiB heap id: 0 00:04:10.248 end heaps---------- 00:04:10.248 8 mempools totaling size 598.116089 MiB 00:04:10.248 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:10.248 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:10.248 size: 84.521057 MiB name: bdev_io_3748720 00:04:10.248 size: 51.011292 MiB name: evtpool_3748720 
00:04:10.248 size: 50.003479 MiB name: msgpool_3748720 00:04:10.248 size: 21.763794 MiB name: PDU_Pool 00:04:10.248 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:10.248 size: 0.026123 MiB name: Session_Pool 00:04:10.248 end mempools------- 00:04:10.248 6 memzones totaling size 4.142822 MiB 00:04:10.248 size: 1.000366 MiB name: RG_ring_0_3748720 00:04:10.248 size: 1.000366 MiB name: RG_ring_1_3748720 00:04:10.248 size: 1.000366 MiB name: RG_ring_4_3748720 00:04:10.248 size: 1.000366 MiB name: RG_ring_5_3748720 00:04:10.248 size: 0.125366 MiB name: RG_ring_2_3748720 00:04:10.248 size: 0.015991 MiB name: RG_ring_3_3748720 00:04:10.248 end memzones------- 00:04:10.248 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:10.248 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:10.248 list of free elements. size: 12.519348 MiB 00:04:10.248 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:10.248 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:10.248 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:10.248 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:10.248 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:10.248 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:10.248 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:10.248 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:10.248 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:10.248 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:10.248 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:10.248 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:10.248 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:10.248 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:04:10.248 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:10.248 list of standard malloc elements. size: 199.218079 MiB 00:04:10.248 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:10.248 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:10.248 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:10.248 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:10.248 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:10.248 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:10.248 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:10.248 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:10.248 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:10.248 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:10.248 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:10.248 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:10.248 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:10.248 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:10.248 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:10.249 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:10.249 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:10.249 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:10.249 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:10.249 list of memzone associated elements. 
size: 602.262573 MiB 00:04:10.249 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:10.249 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:10.249 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:10.249 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:10.249 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:10.249 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3748720_0 00:04:10.249 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:10.249 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3748720_0 00:04:10.249 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:10.249 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3748720_0 00:04:10.249 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:10.249 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:10.249 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:10.249 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:10.249 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:10.249 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3748720 00:04:10.249 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:10.249 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3748720 00:04:10.249 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:10.249 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3748720 00:04:10.249 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:10.249 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:10.249 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:10.249 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:10.249 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:10.249 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:10.249 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:10.249 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:10.249 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:10.249 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3748720 00:04:10.249 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:10.249 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3748720 00:04:10.249 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:10.249 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3748720 00:04:10.249 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:10.249 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3748720 00:04:10.249 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:10.249 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3748720 00:04:10.249 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:10.249 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:10.249 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:10.249 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:10.249 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:10.249 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:10.249 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:10.249 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3748720 00:04:10.249 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:10.249 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:10.249 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:10.249 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:10.249 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:10.249 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3748720 00:04:10.249 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:10.249 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:10.249 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:10.249 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3748720 00:04:10.249 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:10.249 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3748720 00:04:10.249 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:10.249 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:10.249 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:10.249 22:13:35 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3748720 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3748720 ']' 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3748720 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3748720 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3748720' 00:04:10.249 killing process with pid 3748720 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3748720 00:04:10.249 22:13:35 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3748720 00:04:10.508 00:04:10.508 real 0m1.003s 
00:04:10.508 user 0m1.066s 00:04:10.508 sys 0m0.362s 00:04:10.508 22:13:36 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.508 22:13:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:10.508 ************************************ 00:04:10.508 END TEST dpdk_mem_utility 00:04:10.508 ************************************ 00:04:10.508 22:13:36 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.508 22:13:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.508 22:13:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.508 22:13:36 -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 ************************************ 00:04:10.766 START TEST event 00:04:10.766 ************************************ 00:04:10.766 22:13:36 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:10.766 * Looking for test storage... 
00:04:10.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:10.766 22:13:36 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:10.766 22:13:36 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:10.766 22:13:36 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.766 22:13:36 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:10.766 22:13:36 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.766 22:13:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.766 ************************************ 00:04:10.766 START TEST event_perf 00:04:10.766 ************************************ 00:04:10.766 22:13:36 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:10.766 Running I/O for 1 seconds...[2024-07-24 22:13:36.320246] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:10.766 [2024-07-24 22:13:36.320315] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3748876 ] 00:04:10.766 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.766 [2024-07-24 22:13:36.379443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:11.024 [2024-07-24 22:13:36.499636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:11.024 [2024-07-24 22:13:36.499752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:11.024 [2024-07-24 22:13:36.499802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:11.024 [2024-07-24 22:13:36.499806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.959 Running I/O for 1 seconds... 00:04:11.959 lcore 0: 226690 00:04:11.959 lcore 1: 226690 00:04:11.959 lcore 2: 226689 00:04:11.959 lcore 3: 226689 00:04:11.959 done. 
00:04:11.959 00:04:11.959 real 0m1.303s 00:04:11.959 user 0m4.218s 00:04:11.959 sys 0m0.075s 00:04:11.959 22:13:37 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.959 22:13:37 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:11.959 ************************************ 00:04:11.959 END TEST event_perf 00:04:11.959 ************************************ 00:04:11.959 22:13:37 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:11.959 22:13:37 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:11.959 22:13:37 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.959 22:13:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:12.217 ************************************ 00:04:12.217 START TEST event_reactor 00:04:12.217 ************************************ 00:04:12.217 22:13:37 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:12.217 [2024-07-24 22:13:37.683240] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:12.217 [2024-07-24 22:13:37.683314] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749020 ] 00:04:12.217 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.217 [2024-07-24 22:13:37.743688] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.217 [2024-07-24 22:13:37.864147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.588 test_start 00:04:13.588 oneshot 00:04:13.588 tick 100 00:04:13.588 tick 100 00:04:13.588 tick 250 00:04:13.588 tick 100 00:04:13.588 tick 100 00:04:13.588 tick 100 00:04:13.588 tick 250 00:04:13.588 tick 500 00:04:13.588 tick 100 00:04:13.588 tick 100 00:04:13.588 tick 250 00:04:13.588 tick 100 00:04:13.588 tick 100 00:04:13.588 test_end 00:04:13.588 00:04:13.588 real 0m1.306s 00:04:13.588 user 0m1.226s 00:04:13.588 sys 0m0.073s 00:04:13.588 22:13:38 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.588 22:13:38 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 ************************************ 00:04:13.588 END TEST event_reactor 00:04:13.588 ************************************ 00:04:13.588 22:13:38 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.588 22:13:38 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:13.588 22:13:38 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.588 22:13:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:13.588 ************************************ 00:04:13.588 START TEST event_reactor_perf 00:04:13.588 ************************************ 00:04:13.588 22:13:39 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:13.588 [2024-07-24 22:13:39.037726] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:13.588 [2024-07-24 22:13:39.037791] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749219 ] 00:04:13.588 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.588 [2024-07-24 22:13:39.098108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.588 [2024-07-24 22:13:39.217858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.959 test_start 00:04:14.959 test_end 00:04:14.959 Performance: 327985 events per second 00:04:14.959 00:04:14.959 real 0m1.304s 00:04:14.959 user 0m1.222s 00:04:14.959 sys 0m0.074s 00:04:14.959 22:13:40 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:14.959 22:13:40 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:14.959 ************************************ 00:04:14.959 END TEST event_reactor_perf 00:04:14.959 ************************************ 00:04:14.959 22:13:40 event -- event/event.sh@49 -- # uname -s 00:04:14.959 22:13:40 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:14.959 22:13:40 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.959 22:13:40 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:14.959 22:13:40 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.959 22:13:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:14.959 ************************************ 00:04:14.959 START TEST event_scheduler 00:04:14.959 ************************************ 
00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:14.959 * Looking for test storage... 00:04:14.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:14.959 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:14.959 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3749376 00:04:14.959 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.959 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:14.959 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3749376 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3749376 ']' 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.959 22:13:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:14.959 [2024-07-24 22:13:40.487929] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:14.959 [2024-07-24 22:13:40.488031] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749376 ] 00:04:14.959 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.959 [2024-07-24 22:13:40.551917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:15.218 [2024-07-24 22:13:40.672024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.218 [2024-07-24 22:13:40.672079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:15.218 [2024-07-24 22:13:40.672128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:15.218 [2024-07-24 22:13:40.672131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:15.218 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 [2024-07-24 22:13:40.745033] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:15.218 [2024-07-24 22:13:40.745069] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:15.218 [2024-07-24 22:13:40.745088] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:15.218 [2024-07-24 22:13:40.745101] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:15.218 [2024-07-24 22:13:40.745113] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 [2024-07-24 22:13:40.836640] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 ************************************ 00:04:15.218 START TEST scheduler_create_thread 00:04:15.218 ************************************ 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 2 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 3 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 4 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 5 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 6 
00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.218 7 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.218 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.477 8 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.477 9 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:15.477 22:13:40 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.477 10 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:15.477 22:13:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.042 22:13:41 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.042 22:13:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:16.042 22:13:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:16.042 22:13:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:16.042 22:13:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.974 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:16.974 00:04:16.974 real 0m1.754s 00:04:16.974 user 0m0.013s 00:04:16.974 sys 0m0.003s 00:04:16.974 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:16.974 22:13:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:16.974 ************************************ 00:04:16.974 END TEST scheduler_create_thread 00:04:16.974 ************************************ 00:04:16.974 22:13:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:16.974 22:13:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3749376 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3749376 ']' 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3749376 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3749376 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:16.974 22:13:42 
event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3749376' 00:04:16.974 killing process with pid 3749376 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3749376 00:04:16.974 22:13:42 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3749376 00:04:17.539 [2024-07-24 22:13:43.100329] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:17.801 00:04:17.801 real 0m2.926s 00:04:17.801 user 0m3.818s 00:04:17.801 sys 0m0.328s 00:04:17.801 22:13:43 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.801 22:13:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:17.801 ************************************ 00:04:17.801 END TEST event_scheduler 00:04:17.801 ************************************ 00:04:17.801 22:13:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:17.801 22:13:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:17.801 22:13:43 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.801 22:13:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.801 22:13:43 event -- common/autotest_common.sh@10 -- # set +x 00:04:17.801 ************************************ 00:04:17.801 START TEST app_repeat 00:04:17.801 ************************************ 00:04:17.801 22:13:43 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3749644 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3749644' 00:04:17.801 Process app_repeat pid: 3749644 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:17.801 spdk_app_start Round 0 00:04:17.801 22:13:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3749644 /var/tmp/spdk-nbd.sock 00:04:17.801 22:13:43 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3749644 ']' 00:04:17.801 22:13:43 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:17.801 22:13:43 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.801 22:13:43 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:17.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:17.802 22:13:43 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.802 22:13:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:17.802 [2024-07-24 22:13:43.395316] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:17.802 [2024-07-24 22:13:43.395390] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3749644 ] 00:04:17.802 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.802 [2024-07-24 22:13:43.454809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.066 [2024-07-24 22:13:43.572753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.066 [2024-07-24 22:13:43.572822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.066 22:13:43 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.066 22:13:43 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:18.066 22:13:43 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.323 Malloc0 00:04:18.323 22:13:43 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:18.890 Malloc1 00:04:18.890 22:13:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:18.890 22:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:19.148 /dev/nbd0 00:04:19.148 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:19.148 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.148 1+0 records in 00:04:19.148 1+0 records out 00:04:19.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017789 s, 23.0 MB/s 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:19.148 22:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:19.148 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.148 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.148 22:13:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:19.406 /dev/nbd1 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@882 
-- # (( i = 1 )) 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:19.406 1+0 records in 00:04:19.406 1+0 records out 00:04:19.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022474 s, 18.2 MB/s 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:19.406 22:13:44 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.406 22:13:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:19.664 { 00:04:19.664 "nbd_device": "/dev/nbd0", 00:04:19.664 "bdev_name": "Malloc0" 00:04:19.664 }, 00:04:19.664 { 00:04:19.664 "nbd_device": "/dev/nbd1", 00:04:19.664 "bdev_name": "Malloc1" 00:04:19.664 } 00:04:19.664 ]' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:19.664 { 00:04:19.664 "nbd_device": 
"/dev/nbd0", 00:04:19.664 "bdev_name": "Malloc0" 00:04:19.664 }, 00:04:19.664 { 00:04:19.664 "nbd_device": "/dev/nbd1", 00:04:19.664 "bdev_name": "Malloc1" 00:04:19.664 } 00:04:19.664 ]' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:19.664 /dev/nbd1' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:19.664 /dev/nbd1' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:19.664 256+0 records in 00:04:19.664 256+0 records out 00:04:19.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00606269 s, 173 MB/s 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.664 22:13:45 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:19.664 256+0 records in 00:04:19.664 256+0 records out 00:04:19.664 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0271609 s, 38.6 MB/s 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:19.664 22:13:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:19.922 256+0 records in 00:04:19.922 256+0 records out 00:04:19.922 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276821 s, 37.9 MB/s 00:04:19.922 22:13:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:19.922 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.922 22:13:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:19.922 22:13:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
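The `nbd_dd_data_verify` phase traced above writes 1 MiB of random data (256 x 4 KiB blocks) through each `/dev/nbdX` with `oflag=direct`, then byte-compares the device contents against the source file with `cmp -b -n 1M`. A self-contained sketch of that write-then-verify pattern follows; the target here is any writable path, whereas the real trace targets an NBD device and keeps `oflag=direct` (which regular files on some filesystems reject, so it is omitted below):

```shell
#!/usr/bin/env bash
# Sketch of the write-then-verify pattern from nbd_dd_data_verify.
# "target" would be /dev/nbdX in the real test; any writable path works here.
nbd_dd_data_verify_sketch() {
    local target=$1
    local tmp_file rc
    tmp_file=$(mktemp)
    # Fill the temp file with 1 MiB of random data (256 x 4 KiB blocks).
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
    # Write it to the target; the trace adds oflag=direct for O_DIRECT I/O.
    dd if="$tmp_file" of="$target" bs=4096 count=256 status=none
    # Byte-compare the first 1 MiB; cmp exits non-zero on any mismatch.
    cmp -b -n 1M "$tmp_file" "$target"
    rc=$?
    rm -f "$tmp_file"
    return $rc
}
```

A non-zero exit from `cmp` is what would mark the round as failed in the trace.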
00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:19.923 22:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:20.180 22:13:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.181 22:13:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.181 22:13:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:20.181 22:13:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:20.439 
22:13:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:20.439 22:13:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:20.697 22:13:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:20.697 22:13:46 event.app_repeat -- 
event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:21.263 22:13:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:21.263 [2024-07-24 22:13:46.897199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:21.521 [2024-07-24 22:13:47.016043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.521 [2024-07-24 22:13:47.016067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.521 [2024-07-24 22:13:47.066753] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:21.521 [2024-07-24 22:13:47.066830] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:24.048 22:13:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:24.048 22:13:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:24.048 spdk_app_start Round 1 00:04:24.048 22:13:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3749644 /var/tmp/spdk-nbd.sock 00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3749644 ']' 00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:24.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
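Before each device is exercised, the `waitfornbd` helper traced above polls `/proc/partitions` until the nbd name appears (up to 20 iterations), then confirms readability with a single 4 KiB `iflag=direct` read. A simplified sketch of that polling pattern, with the loop bound and dd flags taken from the trace (the sleep interval and exit-code plumbing are assumptions, not the script's exact logic):

```shell
#!/usr/bin/env bash
# Sketch of the waitfornbd pattern seen in the trace: poll /proc/partitions
# until the device appears, then prove it is readable with one direct read.
waitfornbd_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1    # retry interval is an assumption; the trace loops up to 20 times
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # One 4 KiB O_DIRECT read, as in the trace (dd ... count=1 iflag=direct).
    dd "if=/dev/$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct status=none
}
```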
00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:24.048 22:13:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:24.306 22:13:50 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:24.306 22:13:50 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:24.306 22:13:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:24.872 Malloc0 00:04:24.872 22:13:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.130 Malloc1 00:04:25.130 22:13:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.130 22:13:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.388 /dev/nbd0 00:04:25.388 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.388 22:13:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.388 1+0 records in 00:04:25.388 1+0 records out 00:04:25.388 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000164189 s, 24.9 MB/s 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:25.388 22:13:50 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.388 22:13:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:25.388 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.388 22:13:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.388 22:13:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:25.647 /dev/nbd1 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.647 1+0 records in 00:04:25.647 1+0 records out 00:04:25.647 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204859 s, 20.0 MB/s 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:25.647 22:13:51 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.647 22:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:25.905 22:13:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:25.905 { 00:04:25.905 "nbd_device": "/dev/nbd0", 00:04:25.905 "bdev_name": "Malloc0" 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "nbd_device": "/dev/nbd1", 00:04:25.905 "bdev_name": "Malloc1" 00:04:25.905 } 00:04:25.905 ]' 00:04:25.905 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:25.905 { 00:04:25.905 "nbd_device": "/dev/nbd0", 00:04:25.905 "bdev_name": "Malloc0" 00:04:25.905 }, 00:04:25.905 { 00:04:25.905 "nbd_device": "/dev/nbd1", 00:04:25.905 "bdev_name": "Malloc1" 00:04:25.905 } 00:04:25.905 ]' 00:04:25.905 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.163 /dev/nbd1' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.163 /dev/nbd1' 00:04:26.163 
22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.163 256+0 records in 00:04:26.163 256+0 records out 00:04:26.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00597826 s, 175 MB/s 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.163 256+0 records in 00:04:26.163 256+0 records out 00:04:26.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260321 s, 40.3 MB/s 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.163 256+0 records in 00:04:26.163 256+0 records out 00:04:26.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274297 s, 38.2 MB/s 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.163 22:13:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.421 22:13:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:26.679 22:13:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.679 22:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.245 22:13:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.245 22:13:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.503 22:13:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:27.503 [2024-07-24 22:13:53.201947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.765 [2024-07-24 22:13:53.319744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:27.765 [2024-07-24 22:13:53.319777] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.765 [2024-07-24 22:13:53.368393] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:27.765 [2024-07-24 22:13:53.368469] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:31.047 22:13:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:31.047 22:13:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:31.047 spdk_app_start Round 2 00:04:31.047 22:13:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3749644 /var/tmp/spdk-nbd.sock 00:04:31.047 22:13:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3749644 ']' 00:04:31.047 22:13:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:31.047 22:13:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.047 22:13:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:31.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
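Each round in the trace tears down the same way: `nbd_stop_disk` for both devices, then a recount via `nbd_get_disks` that must come back empty. The count step first extracts device paths from the JSON with `jq -r '.[] | .nbd_device'`, then counts matches; `grep -c` exits non-zero when nothing matches, which is why the trace shows a `true` fallback producing count=0. A minimal sketch of that counting step (function name is hypothetical; the jq extraction is folded into the input for brevity):

```shell
#!/usr/bin/env bash
# Sketch of the count step from nbd_get_count: given the newline-separated
# device paths already extracted by jq, count them. grep -c prints 0 but
# exits non-zero on zero matches, so "|| true" keeps the pipeline succeeding.
count_nbd_devices() {
    local nbd_disks_name=$1   # newline-separated /dev/nbdX paths, possibly empty
    echo "$nbd_disks_name" | grep -c /dev/nbd || true
}
```

In the trace, a count of 0 after teardown (`'[' 0 -ne 0 ']'` not taken) is what lets the round return 0 and proceed to `spdk_kill_instance SIGTERM`.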
00:04:31.048 22:13:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.048 22:13:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:31.048 22:13:56 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.048 22:13:56 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:31.048 22:13:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.048 Malloc0 00:04:31.048 22:13:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.305 Malloc1 00:04:31.305 22:13:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:31.305 22:13:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.306 22:13:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.306 22:13:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.563 /dev/nbd0 00:04:31.563 22:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.563 22:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.563 1+0 records in 00:04:31.563 1+0 records out 00:04:31.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162712 s, 25.2 MB/s 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:31.563 22:13:57 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:31.563 22:13:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:31.563 22:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.563 22:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.563 22:13:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:31.821 /dev/nbd1 00:04:31.821 22:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:31.821 22:13:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.821 1+0 records in 00:04:31.821 1+0 records out 00:04:31.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231112 s, 17.7 MB/s 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:31.821 22:13:57 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:31.822 22:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.822 22:13:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.822 22:13:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:31.822 22:13:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.822 22:13:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:32.080 { 00:04:32.080 "nbd_device": "/dev/nbd0", 00:04:32.080 "bdev_name": "Malloc0" 00:04:32.080 }, 00:04:32.080 { 00:04:32.080 "nbd_device": "/dev/nbd1", 00:04:32.080 "bdev_name": "Malloc1" 00:04:32.080 } 00:04:32.080 ]' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.080 { 00:04:32.080 "nbd_device": "/dev/nbd0", 00:04:32.080 "bdev_name": "Malloc0" 00:04:32.080 }, 00:04:32.080 { 00:04:32.080 "nbd_device": "/dev/nbd1", 00:04:32.080 "bdev_name": "Malloc1" 00:04:32.080 } 00:04:32.080 ]' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.080 /dev/nbd1' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.080 /dev/nbd1' 00:04:32.080 
22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.080 256+0 records in 00:04:32.080 256+0 records out 00:04:32.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053466 s, 196 MB/s 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.080 256+0 records in 00:04:32.080 256+0 records out 00:04:32.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253194 s, 41.4 MB/s 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.080 256+0 records in 00:04:32.080 256+0 records out 00:04:32.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267276 s, 39.2 MB/s 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.080 22:13:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.081 22:13:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.372 22:13:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.653 22:13:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.653 22:13:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:32.912 22:13:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:32.912 22:13:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.171 22:13:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:33.429 [2024-07-24 22:13:59.039236] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.687 [2024-07-24 22:13:59.157269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.687 [2024-07-24 22:13:59.157270] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.687 [2024-07-24 22:13:59.206593] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.687 [2024-07-24 22:13:59.206667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.216 22:14:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3749644 /var/tmp/spdk-nbd.sock 00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3749644 ']' 00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.216 22:14:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:36.474 22:14:02 event.app_repeat -- event/event.sh@39 -- # killprocess 3749644 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3749644 ']' 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3749644 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3749644 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3749644' 00:04:36.474 killing process with pid 3749644 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3749644 00:04:36.474 22:14:02 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3749644 00:04:36.735 spdk_app_start is called in Round 0. 00:04:36.735 Shutdown signal received, stop current app iteration 00:04:36.735 Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 reinitialization... 00:04:36.735 spdk_app_start is called in Round 1. 00:04:36.735 Shutdown signal received, stop current app iteration 00:04:36.735 Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 reinitialization... 00:04:36.735 spdk_app_start is called in Round 2. 
00:04:36.735 Shutdown signal received, stop current app iteration 00:04:36.735 Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 reinitialization... 00:04:36.735 spdk_app_start is called in Round 3. 00:04:36.735 Shutdown signal received, stop current app iteration 00:04:36.735 22:14:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:36.735 22:14:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:36.735 00:04:36.735 real 0m19.001s 00:04:36.735 user 0m42.024s 00:04:36.735 sys 0m3.282s 00:04:36.735 22:14:02 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.735 22:14:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:36.735 ************************************ 00:04:36.735 END TEST app_repeat 00:04:36.735 ************************************ 00:04:36.735 22:14:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:36.735 22:14:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.735 22:14:02 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.735 22:14:02 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.735 22:14:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.735 ************************************ 00:04:36.735 START TEST cpu_locks 00:04:36.735 ************************************ 00:04:36.735 22:14:02 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:36.994 * Looking for test storage... 
00:04:36.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:36.994 22:14:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:36.994 22:14:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:36.994 22:14:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:36.994 22:14:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:36.994 22:14:02 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:36.994 22:14:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.994 22:14:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.994 ************************************ 00:04:36.994 START TEST default_locks 00:04:36.994 ************************************ 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3751665 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3751665 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3751665 ']' 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:36.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:36.994 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.994 [2024-07-24 22:14:02.575211] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:36.994 [2024-07-24 22:14:02.575318] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751665 ] 00:04:36.994 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.994 [2024-07-24 22:14:02.634771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.252 [2024-07-24 22:14:02.751764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.511 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:37.511 22:14:02 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:04:37.511 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3751665 00:04:37.511 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3751665 00:04:37.511 22:14:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.771 lslocks: write error 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3751665 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3751665 ']' 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3751665 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:04:37.771 22:14:03 event.cpu_locks.default_locks 
-- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3751665 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3751665' 00:04:37.771 killing process with pid 3751665 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3751665 00:04:37.771 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3751665 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3751665 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3751665 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3751665 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3751665 ']' 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3751665) - No such process 00:04:38.030 ERROR: process (pid: 3751665) is no longer running 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:38.030 00:04:38.030 real 0m1.109s 00:04:38.030 user 0m1.104s 00:04:38.030 sys 0m0.521s 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.030 22:14:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:04:38.030 ************************************ 00:04:38.030 END TEST default_locks 00:04:38.030 ************************************ 00:04:38.030 22:14:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:38.030 22:14:03 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.030 22:14:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.030 22:14:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.030 ************************************ 00:04:38.030 START TEST default_locks_via_rpc 00:04:38.030 ************************************ 00:04:38.030 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:04:38.030 22:14:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3751795 00:04:38.030 22:14:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.030 22:14:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3751795 00:04:38.030 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3751795 ']' 00:04:38.031 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.031 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.031 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:38.031 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.031 22:14:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.291 [2024-07-24 22:14:03.744522] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:38.291 [2024-07-24 22:14:03.744620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751795 ] 00:04:38.291 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.291 [2024-07-24 22:14:03.804027] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.291 [2024-07-24 22:14:03.920664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 
00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3751795 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3751795 00:04:38.551 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3751795 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3751795 ']' 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3751795 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3751795 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3751795' 00:04:39.120 killing process with pid 3751795 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@967 -- # kill 3751795 00:04:39.120 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3751795 00:04:39.379 00:04:39.379 real 0m1.236s 00:04:39.379 user 0m1.223s 00:04:39.379 sys 0m0.529s 00:04:39.379 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.379 22:14:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.379 ************************************ 00:04:39.379 END TEST default_locks_via_rpc 00:04:39.379 ************************************ 00:04:39.379 22:14:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:39.379 22:14:04 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.379 22:14:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.379 22:14:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:39.379 ************************************ 00:04:39.379 START TEST non_locking_app_on_locked_coremask 00:04:39.379 ************************************ 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3751931 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3751931 /var/tmp/spdk.sock 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3751931 ']' 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.379 22:14:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.379 [2024-07-24 22:14:05.032497] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:39.379 [2024-07-24 22:14:05.032591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3751931 ] 00:04:39.379 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.639 [2024-07-24 22:14:05.092130] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.639 [2024-07-24 22:14:05.209157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3752027 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3752027 /var/tmp/spdk2.sock 00:04:39.898 22:14:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752027 ']' 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:39.898 22:14:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.898 [2024-07-24 22:14:05.500827] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:39.898 [2024-07-24 22:14:05.500927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752027 ] 00:04:39.898 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.898 [2024-07-24 22:14:05.592078] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:39.898 [2024-07-24 22:14:05.592114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.158 [2024-07-24 22:14:05.825920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.099 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.099 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:41.099 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3751931 00:04:41.099 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3751931 00:04:41.099 22:14:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:41.668 lslocks: write error 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3751931 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3751931 ']' 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3751931 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3751931 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3751931' 00:04:41.668 killing process with pid 3751931 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3751931 00:04:41.668 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3751931 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3752027 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3752027 ']' 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3752027 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.608 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752027 00:04:42.609 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.609 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.609 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752027' 00:04:42.609 killing process with pid 3752027 00:04:42.609 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3752027 00:04:42.609 22:14:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3752027 00:04:42.868 00:04:42.868 real 0m3.349s 00:04:42.868 user 0m3.706s 00:04:42.868 sys 0m1.022s 00:04:42.868 22:14:08 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.868 22:14:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.868 ************************************ 00:04:42.868 END TEST non_locking_app_on_locked_coremask 00:04:42.868 ************************************ 00:04:42.868 22:14:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:42.868 22:14:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.868 22:14:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.868 22:14:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:42.868 ************************************ 00:04:42.868 START TEST locking_app_on_unlocked_coremask 00:04:42.868 ************************************ 00:04:42.868 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:04:42.868 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3752273 00:04:42.868 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:42.868 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3752273 /var/tmp/spdk.sock 00:04:42.868 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752273 ']' 00:04:42.869 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.869 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.869 22:14:08 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.869 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.869 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.869 [2024-07-24 22:14:08.436292] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:42.869 [2024-07-24 22:14:08.436388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752273 ] 00:04:42.869 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.869 [2024-07-24 22:14:08.496946] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:42.869 [2024-07-24 22:14:08.497001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.130 [2024-07-24 22:14:08.616115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3752365 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3752365 /var/tmp/spdk2.sock 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752365 ']' 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:43.389 22:14:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.389 [2024-07-24 22:14:08.906070] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:43.389 [2024-07-24 22:14:08.906172] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752365 ] 00:04:43.389 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.389 [2024-07-24 22:14:08.998064] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.649 [2024-07-24 22:14:09.234745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.591 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:44.591 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:44.591 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3752365 00:04:44.591 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3752365 00:04:44.591 22:14:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.852 lslocks: write error 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3752273 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3752273 ']' 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3752273 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752273 00:04:44.852 22:14:10 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752273' 00:04:44.852 killing process with pid 3752273 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3752273 00:04:44.852 22:14:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3752273 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3752365 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3752365 ']' 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3752365 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752365 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752365' 00:04:45.793 killing process with pid 3752365 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@967 -- # kill 3752365 00:04:45.793 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3752365 00:04:46.052 00:04:46.052 real 0m3.144s 00:04:46.052 user 0m3.526s 00:04:46.052 sys 0m1.022s 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.052 ************************************ 00:04:46.052 END TEST locking_app_on_unlocked_coremask 00:04:46.052 ************************************ 00:04:46.052 22:14:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:46.052 22:14:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.052 22:14:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.052 22:14:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.052 ************************************ 00:04:46.052 START TEST locking_app_on_locked_coremask 00:04:46.052 ************************************ 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3752607 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3752607 /var/tmp/spdk.sock 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752607 ']' 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.052 22:14:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.052 [2024-07-24 22:14:11.635319] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:46.052 [2024-07-24 22:14:11.635419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752607 ] 00:04:46.052 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.052 [2024-07-24 22:14:11.694443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.311 [2024-07-24 22:14:11.811560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3752706 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3752706 /var/tmp/spdk2.sock 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@648 -- # local es=0 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3752706 /var/tmp/spdk2.sock 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:46.569 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3752706 /var/tmp/spdk2.sock 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752706 ']' 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:46.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:46.570 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:46.570 [2024-07-24 22:14:12.100005] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:46.570 [2024-07-24 22:14:12.100106] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752706 ] 00:04:46.570 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.570 [2024-07-24 22:14:12.190721] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3752607 has claimed it. 00:04:46.570 [2024-07-24 22:14:12.190786] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:47.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3752706) - No such process 00:04:47.138 ERROR: process (pid: 3752706) is no longer running 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- 
event/cpu_locks.sh@122 -- # locks_exist 3752607 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3752607 00:04:47.138 22:14:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:47.396 lslocks: write error 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3752607 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3752607 ']' 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3752607 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:47.396 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752607 00:04:47.654 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:47.654 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:47.654 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752607' 00:04:47.654 killing process with pid 3752607 00:04:47.654 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3752607 00:04:47.654 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3752607 00:04:47.912 00:04:47.912 real 0m1.851s 00:04:47.912 user 0m2.125s 00:04:47.912 sys 0m0.615s 00:04:47.912 22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.912 
22:14:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.912 ************************************ 00:04:47.912 END TEST locking_app_on_locked_coremask 00:04:47.912 ************************************ 00:04:47.912 22:14:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:47.912 22:14:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.912 22:14:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.912 22:14:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:47.912 ************************************ 00:04:47.912 START TEST locking_overlapped_coremask 00:04:47.912 ************************************ 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3752839 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3752839 /var/tmp/spdk.sock 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752839 ']' 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:47.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.912 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:47.912 [2024-07-24 22:14:13.548670] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:47.912 [2024-07-24 22:14:13.548773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752839 ] 00:04:47.912 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.912 [2024-07-24 22:14:13.613436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.171 [2024-07-24 22:14:13.739509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.171 [2024-07-24 22:14:13.739608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.171 [2024-07-24 22:14:13.739572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3752853 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3752853 /var/tmp/spdk2.sock 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@648 -- # local es=0 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3752853 /var/tmp/spdk2.sock 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3752853 /var/tmp/spdk2.sock 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3752853 ']' 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:48.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.429 22:14:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:48.429 [2024-07-24 22:14:14.033393] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:48.429 [2024-07-24 22:14:14.033500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752853 ] 00:04:48.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.429 [2024-07-24 22:14:14.123220] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3752839 has claimed it. 00:04:48.429 [2024-07-24 22:14:14.123280] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:49.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3752853) - No such process 00:04:49.366 ERROR: process (pid: 3752853) is no longer running 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:49.366 22:14:14 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3752839 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3752839 ']' 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3752839 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752839 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752839' 00:04:49.366 killing process with pid 3752839 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3752839 00:04:49.366 22:14:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3752839 00:04:49.627 00:04:49.627 real 0m1.641s 00:04:49.627 user 0m4.384s 00:04:49.627 sys 0m0.432s 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.627 22:14:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:49.627 ************************************ 00:04:49.627 END TEST locking_overlapped_coremask 00:04:49.627 ************************************ 00:04:49.627 22:14:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:49.627 22:14:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.627 22:14:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.627 22:14:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:49.627 ************************************ 00:04:49.627 START TEST locking_overlapped_coremask_via_rpc 00:04:49.627 ************************************ 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3752987 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3752987 /var/tmp/spdk.sock 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3752987 ']' 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:49.627 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.627 [2024-07-24 22:14:15.246753] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:49.627 [2024-07-24 22:14:15.246842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3752987 ] 00:04:49.627 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.627 [2024-07-24 22:14:15.308409] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:49.627 [2024-07-24 22:14:15.308458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:49.888 [2024-07-24 22:14:15.431146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.888 [2024-07-24 22:14:15.431197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.888 [2024-07-24 22:14:15.431201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3753079 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 
--disable-cpumask-locks 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3753079 /var/tmp/spdk2.sock 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3753079 ']' 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:50.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:50.146 22:14:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.146 [2024-07-24 22:14:15.717933] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:50.146 [2024-07-24 22:14:15.718034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753079 ] 00:04:50.146 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.146 [2024-07-24 22:14:15.806828] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:50.146 [2024-07-24 22:14:15.806876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:50.406 [2024-07-24 22:14:16.046623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:50.406 [2024-07-24 22:14:16.050552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:04:50.406 [2024-07-24 22:14:16.050555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.341 22:14:16 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.341 [2024-07-24 22:14:16.766595] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3752987 has claimed it. 00:04:51.341 request: 00:04:51.341 { 00:04:51.341 "method": "framework_enable_cpumask_locks", 00:04:51.341 "req_id": 1 00:04:51.341 } 00:04:51.341 Got JSON-RPC error response 00:04:51.341 response: 00:04:51.341 { 00:04:51.341 "code": -32603, 00:04:51.341 "message": "Failed to claim CPU core: 2" 00:04:51.341 } 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3752987 /var/tmp/spdk.sock 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 
-- # '[' -z 3752987 ']' 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.341 22:14:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3753079 /var/tmp/spdk2.sock 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3753079 ']' 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.599 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:51.860 00:04:51.860 real 0m2.185s 00:04:51.860 user 0m1.244s 00:04:51.860 sys 0m0.208s 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.860 22:14:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.860 ************************************ 00:04:51.860 END TEST locking_overlapped_coremask_via_rpc 00:04:51.860 ************************************ 00:04:51.860 22:14:17 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:51.860 22:14:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3752987 ]] 00:04:51.860 22:14:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3752987 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3752987 ']' 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3752987 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752987 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752987' 00:04:51.860 killing process with pid 3752987 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3752987 00:04:51.860 22:14:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3752987 00:04:52.120 22:14:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3753079 ]] 00:04:52.120 22:14:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3753079 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3753079 ']' 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3753079 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3753079 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:52.120 22:14:17 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:52.121 22:14:17 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3753079' 00:04:52.121 killing process with pid 3753079 00:04:52.121 22:14:17 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3753079 00:04:52.121 22:14:17 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3753079 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3752987 ]] 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3752987 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3752987 ']' 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3752987 00:04:52.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3752987) - No such process 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3752987 is not found' 00:04:52.690 Process with pid 3752987 is not found 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3753079 ]] 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3753079 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3753079 ']' 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3753079 00:04:52.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3753079) - No such process 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3753079 is not found' 00:04:52.690 Process with pid 3753079 is not found 00:04:52.690 22:14:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:52.690 00:04:52.690 real 0m15.698s 00:04:52.690 user 0m28.339s 00:04:52.690 sys 0m5.214s 00:04:52.690 22:14:18 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.690 
22:14:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 ************************************ 00:04:52.690 END TEST cpu_locks 00:04:52.690 ************************************ 00:04:52.690 00:04:52.690 real 0m41.928s 00:04:52.690 user 1m21.005s 00:04:52.690 sys 0m9.297s 00:04:52.690 22:14:18 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.690 22:14:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 ************************************ 00:04:52.690 END TEST event 00:04:52.690 ************************************ 00:04:52.690 22:14:18 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.690 22:14:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.690 22:14:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.690 22:14:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 ************************************ 00:04:52.690 START TEST thread 00:04:52.690 ************************************ 00:04:52.690 22:14:18 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:52.690 * Looking for test storage... 
00:04:52.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:52.690 22:14:18 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.690 22:14:18 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:52.690 22:14:18 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.690 22:14:18 thread -- common/autotest_common.sh@10 -- # set +x 00:04:52.690 ************************************ 00:04:52.690 START TEST thread_poller_perf 00:04:52.690 ************************************ 00:04:52.690 22:14:18 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:52.690 [2024-07-24 22:14:18.302903] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:52.691 [2024-07-24 22:14:18.302976] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753385 ] 00:04:52.691 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.691 [2024-07-24 22:14:18.362978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.950 [2024-07-24 22:14:18.481168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.950 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:04:54.333 ====================================== 00:04:54.333 busy:2718206680 (cyc) 00:04:54.333 total_run_count: 261000 00:04:54.333 tsc_hz: 2700000000 (cyc) 00:04:54.333 ====================================== 00:04:54.333 poller_cost: 10414 (cyc), 3857 (nsec) 00:04:54.333 00:04:54.333 real 0m1.314s 00:04:54.333 user 0m1.231s 00:04:54.333 sys 0m0.075s 00:04:54.333 22:14:19 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.333 22:14:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.333 ************************************ 00:04:54.333 END TEST thread_poller_perf 00:04:54.333 ************************************ 00:04:54.333 22:14:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.333 22:14:19 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:54.333 22:14:19 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.333 22:14:19 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.333 ************************************ 00:04:54.333 START TEST thread_poller_perf 00:04:54.333 ************************************ 00:04:54.333 22:14:19 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:54.333 [2024-07-24 22:14:19.671509] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:54.333 [2024-07-24 22:14:19.671589] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753520 ] 00:04:54.333 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.333 [2024-07-24 22:14:19.732321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.333 [2024-07-24 22:14:19.853789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.333 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:55.273 ====================================== 00:04:55.273 busy:2703015640 (cyc) 00:04:55.273 total_run_count: 3665000 00:04:55.273 tsc_hz: 2700000000 (cyc) 00:04:55.273 ====================================== 00:04:55.273 poller_cost: 737 (cyc), 272 (nsec) 00:04:55.273 00:04:55.273 real 0m1.309s 00:04:55.273 user 0m1.219s 00:04:55.273 sys 0m0.082s 00:04:55.273 22:14:20 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.273 22:14:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:55.273 ************************************ 00:04:55.273 END TEST thread_poller_perf 00:04:55.273 ************************************ 00:04:55.534 22:14:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:55.534 00:04:55.534 real 0m2.782s 00:04:55.534 user 0m2.512s 00:04:55.534 sys 0m0.264s 00:04:55.534 22:14:20 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.534 22:14:20 thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.534 ************************************ 00:04:55.534 END TEST thread 00:04:55.535 ************************************ 00:04:55.535 22:14:21 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:55.535 22:14:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:55.535 22:14:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.535 22:14:21 -- common/autotest_common.sh@10 -- # set +x 00:04:55.535 ************************************ 00:04:55.535 START TEST accel 00:04:55.535 ************************************ 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:55.535 * Looking for test storage... 00:04:55.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:55.535 22:14:21 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:55.535 22:14:21 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:55.535 22:14:21 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:55.535 22:14:21 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3753760 00:04:55.535 22:14:21 accel -- accel/accel.sh@63 -- # waitforlisten 3753760 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@829 -- # '[' -z 3753760 ']' 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:55.535 22:14:21 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.535 22:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.535 22:14:21 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:55.535 22:14:21 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:55.535 22:14:21 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.535 22:14:21 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.535 22:14:21 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.535 22:14:21 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.535 22:14:21 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.535 22:14:21 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:55.535 22:14:21 accel -- accel/accel.sh@41 -- # jq -r . 00:04:55.535 [2024-07-24 22:14:21.156066] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:55.535 [2024-07-24 22:14:21.156166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753760 ] 00:04:55.535 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.535 [2024-07-24 22:14:21.218238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.796 [2024-07-24 22:14:21.338993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.058 22:14:21 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.058 22:14:21 accel -- common/autotest_common.sh@862 -- # return 0 00:04:56.058 22:14:21 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:56.058 22:14:21 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:56.058 22:14:21 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:56.058 22:14:21 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:56.058 22:14:21 accel -- accel/accel.sh@70 -- 
# exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:56.058 22:14:21 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:56.058 22:14:21 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:56.058 22:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.058 22:14:21 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:04:56.058 22:14:21 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:56.058 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.058 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.058 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.058 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.058 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.058 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.058 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 
accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 
accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # IFS== 00:04:56.059 22:14:21 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:56.059 22:14:21 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:56.059 22:14:21 accel -- accel/accel.sh@75 -- # killprocess 3753760 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@948 -- # '[' -z 3753760 ']' 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@952 -- # kill -0 3753760 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@953 -- # uname 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3753760 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3753760' 00:04:56.059 killing process with pid 3753760 00:04:56.059 22:14:21 accel -- 
common/autotest_common.sh@967 -- # kill 3753760 00:04:56.059 22:14:21 accel -- common/autotest_common.sh@972 -- # wait 3753760 00:04:56.318 22:14:21 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:56.318 22:14:21 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:56.318 22:14:21 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:56.318 22:14:21 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.318 22:14:21 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.318 22:14:22 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.318 22:14:22 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.319 22:14:22 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:56.319 22:14:22 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
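The expected_opcs bookkeeping traced earlier (accel.sh@71-@73) is a plain key=value parse of the rpc output. A minimal standalone sketch of the same `IFS== read` idiom, using a hypothetical sample of the lines that `accel_get_opc_assignments | jq` would emit (the real opcode list comes from the rpc, not these literals):

```shell
#!/usr/bin/env bash
# Hypothetical sample of "opcode=module" lines; the trace builds exp_opcs
# from rpc_cmd accel_get_opc_assignments piped through jq instead.
exp_opcs=("copy=software" "fill=software" "crc32c=software")

declare -A expected_opcs
for opc_opt in "${exp_opcs[@]}"; do
    # Split on '=' exactly as accel.sh@72 does: IFS applies only to this read.
    IFS== read -r opc module <<< "$opc_opt"
    expected_opcs["$opc"]=$module
done
```

Because `IFS==` is written as a prefix to `read`, the global IFS is untouched, which is why the trace never restores it between iterations.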
00:04:56.580 22:14:22 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.580 22:14:22 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:56.580 22:14:22 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:56.580 22:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:56.580 22:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.580 22:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.580 ************************************ 00:04:56.580 START TEST accel_missing_filename 00:04:56.580 ************************************ 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.580 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.580 22:14:22 
accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:56.580 22:14:22 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:56.580 [2024-07-24 22:14:22.101077] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:56.580 [2024-07-24 22:14:22.101153] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753896 ] 00:04:56.580 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.580 [2024-07-24 22:14:22.161294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.580 [2024-07-24 22:14:22.281296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.846 [2024-07-24 22:14:22.333175] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.846 [2024-07-24 22:14:22.382723] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:56.846 A filename is required. 
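The accel_missing_filename test above drives accel_perf through a NOT wrapper that succeeds only when the wrapped command fails. A simplified sketch of that inversion pattern; the real helper in common/autotest_common.sh additionally runs valid_exec_arg and the es bookkeeping seen in the trace:

```shell
#!/usr/bin/env bash
# Sketch only: invert a command's exit status, so "expected failure" passes.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded -> test failure
    fi
    return 0       # command failed, which is what the test wanted
}

# Usage: a command that must fail makes the test line succeed.
NOT false && result=ok || result=bad
```

In the log, `NOT accel_perf -t 1 -w compress` passes precisely because accel_perf aborts with "A filename is required."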
00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.846 00:04:56.846 real 0m0.413s 00:04:56.846 user 0m0.311s 00:04:56.846 sys 0m0.136s 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.846 22:14:22 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:56.846 ************************************ 00:04:56.846 END TEST accel_missing_filename 00:04:56.846 ************************************ 00:04:56.846 22:14:22 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:56.846 22:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:56.846 22:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.846 22:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.171 ************************************ 00:04:57.171 START TEST accel_compress_verify 00:04:57.171 ************************************ 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.171 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:57.171 22:14:22 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:57.171 [2024-07-24 22:14:22.570032] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:57.172 [2024-07-24 22:14:22.570105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3753920 ] 00:04:57.172 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.172 [2024-07-24 22:14:22.630786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.172 [2024-07-24 22:14:22.751132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.172 [2024-07-24 22:14:22.802444] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.172 [2024-07-24 22:14:22.853024] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:57.457 00:04:57.457 Compression does not support the verify option, aborting. 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.457 00:04:57.457 real 0m0.413s 00:04:57.457 user 0m0.320s 00:04:57.457 sys 0m0.128s 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.457 22:14:22 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:57.457 ************************************ 00:04:57.457 END TEST accel_compress_verify 00:04:57.457 ************************************ 00:04:57.457 22:14:22 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w 
foobar 00:04:57.457 22:14:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:57.457 22:14:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.457 22:14:22 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.457 ************************************ 00:04:57.457 START TEST accel_wrong_workload 00:04:57.457 ************************************ 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@36 -- # 
[[ -n '' ]] 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:57.457 22:14:23 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:57.457 Unsupported workload type: foobar 00:04:57.457 [2024-07-24 22:14:23.036060] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:57.457 accel_perf options: 00:04:57.457 [-h help message] 00:04:57.457 [-q queue depth per core] 00:04:57.457 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:57.457 [-T number of threads per core 00:04:57.457 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:57.457 [-t time in seconds] 00:04:57.457 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:57.457 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:57.457 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:57.457 [-l for compress/decompress workloads, name of uncompressed input file 00:04:57.457 [-S for crc32c workload, use this seed value (default 0) 00:04:57.457 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:57.457 [-f for fill workload, use this BYTE value (default 255) 00:04:57.457 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:57.457 [-y verify result if this switch is on] 00:04:57.457 [-a tasks to allocate per core (default: same value as -q)] 00:04:57.457 Can be used to spread operations across a wider range of memory. 
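The es bookkeeping in this trace (@651 es=234, @659 `(( es > 128 ))`, @660 es=106, @668 es=1) normalizes exit statuses before the final check: codes above 128 have the killed-by-signal bias stripped, then any remaining failure collapses to 1. A minimal sketch of that flow; the function name normalize_es is invented for illustration, the real logic is inlined in autotest_common.sh:

```shell
#!/usr/bin/env bash
# Sketch of the exit-status normalization visible in the trace (234 -> 106 -> 1).
normalize_es() {
    local es=$1
    # Exit codes above 128 conventionally mean "terminated by signal es-128".
    (( es > 128 )) && es=$((es - 128))
    case "$es" in
        0) : ;;        # success passes through unchanged
        *) es=1 ;;     # every distinct failure collapses to a generic 1
    esac
    echo "$es"
}
```

This is why the test only asserts `(( !es == 0 ))` at the end: after normalization, the exact abort code of accel_perf no longer matters.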
00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.457 00:04:57.457 real 0m0.028s 00:04:57.457 user 0m0.014s 00:04:57.457 sys 0m0.014s 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.457 22:14:23 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:57.457 ************************************ 00:04:57.457 END TEST accel_wrong_workload 00:04:57.457 ************************************ 00:04:57.458 Error: writing output failed: Broken pipe 00:04:57.458 22:14:23 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 ************************************ 00:04:57.458 START TEST accel_negative_buffers 00:04:57.458 ************************************ 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.458 22:14:23 
accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:57.458 22:14:23 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:57.458 -x option must be non-negative. 00:04:57.458 [2024-07-24 22:14:23.116985] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:57.458 accel_perf options: 00:04:57.458 [-h help message] 00:04:57.458 [-q queue depth per core] 00:04:57.458 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:57.458 [-T number of threads per core 00:04:57.458 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:57.458 [-t time in seconds] 00:04:57.458 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:57.458 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:57.458 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:57.458 [-l for compress/decompress workloads, name of uncompressed input file 00:04:57.458 [-S for crc32c workload, use this seed value (default 0) 00:04:57.458 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:57.458 [-f for fill workload, use this BYTE value (default 255) 00:04:57.458 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:57.458 [-y verify result if this switch is on] 00:04:57.458 [-a tasks to allocate per core (default: same value as -q)] 00:04:57.458 Can be used to spread operations across a wider range of memory. 
00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.458 00:04:57.458 real 0m0.025s 00:04:57.458 user 0m0.012s 00:04:57.458 sys 0m0.013s 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.458 22:14:23 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:57.458 ************************************ 00:04:57.458 END TEST accel_negative_buffers 00:04:57.458 ************************************ 00:04:57.458 Error: writing output failed: Broken pipe 00:04:57.458 22:14:23 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.458 22:14:23 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.717 ************************************ 00:04:57.717 START TEST accel_crc32c 00:04:57.717 ************************************ 00:04:57.717 22:14:23 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:57.717 22:14:23 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:57.717 [2024-07-24 22:14:23.190082] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:04:57.717 [2024-07-24 22:14:23.190151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754081 ] 00:04:57.717 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.717 [2024-07-24 22:14:23.250852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.717 [2024-07-24 22:14:23.371218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 
00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.975 22:14:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 
accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:58.911 22:14:24 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.911 00:04:58.911 real 0m1.418s 00:04:58.911 user 0m1.291s 00:04:58.911 sys 0m0.128s 00:04:58.911 22:14:24 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.911 22:14:24 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:58.911 ************************************ 00:04:58.911 END TEST accel_crc32c 00:04:58.911 ************************************ 00:04:59.170 22:14:24 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:59.170 22:14:24 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:59.170 22:14:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.170 22:14:24 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.170 ************************************ 00:04:59.170 START TEST accel_crc32c_C2 00:04:59.170 
************************************ 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:59.170 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:59.170 [2024-07-24 22:14:24.660613] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:04:59.170 [2024-07-24 22:14:24.660687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754200 ] 00:04:59.170 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.170 [2024-07-24 22:14:24.720659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.170 [2024-07-24 22:14:24.840998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.429 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # 
read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.430 22:14:24 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.430 22:14:24 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.365 
22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.365 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:00.366 22:14:26 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.366 00:05:00.366 real 0m1.413s 00:05:00.366 user 0m1.287s 00:05:00.366 sys 0m0.125s 00:05:00.366 22:14:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.366 22:14:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:00.366 ************************************ 00:05:00.366 END TEST accel_crc32c_C2 00:05:00.366 ************************************ 00:05:00.625 22:14:26 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:00.625 22:14:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:00.625 22:14:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.625 22:14:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.625 ************************************ 00:05:00.625 START TEST accel_copy 00:05:00.625 ************************************ 00:05:00.625 22:14:26 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:00.625 22:14:26 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:00.625 [2024-07-24 22:14:26.126662] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:00.625 [2024-07-24 22:14:26.126739] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754325 ] 00:05:00.625 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.625 [2024-07-24 22:14:26.186907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.625 [2024-07-24 22:14:26.305084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.883 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.883 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- 
accel/accel.sh@20 -- # val=software 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:00.884 22:14:26 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var 
val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:01.818 22:14:27 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.818 00:05:01.818 real 0m1.409s 00:05:01.818 user 0m1.287s 00:05:01.818 sys 0m0.123s 00:05:01.818 22:14:27 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.818 22:14:27 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:01.818 ************************************ 00:05:01.818 END TEST accel_copy 00:05:01.818 ************************************ 00:05:02.077 22:14:27 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:02.077 22:14:27 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:02.077 22:14:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.077 22:14:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.077 ************************************ 00:05:02.077 START TEST accel_fill 00:05:02.077 ************************************ 00:05:02.077 22:14:27 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # 
read -r var val 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:02.077 22:14:27 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:02.078 22:14:27 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:02.078 [2024-07-24 22:14:27.585282] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:02.078 [2024-07-24 22:14:27.585355] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754539 ] 00:05:02.078 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.078 [2024-07-24 22:14:27.645409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.078 [2024-07-24 22:14:27.765812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 
22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 
accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 
22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:02.336 22:14:27 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- 
accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:03.711 22:14:28 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.711 00:05:03.711 real 0m1.416s 00:05:03.711 user 0m1.287s 00:05:03.711 sys 0m0.130s 00:05:03.711 22:14:28 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.711 22:14:28 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:03.711 ************************************ 00:05:03.711 END TEST accel_fill 00:05:03.711 ************************************ 00:05:03.711 22:14:29 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:03.711 22:14:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:03.711 22:14:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.711 22:14:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.711 ************************************ 00:05:03.711 START TEST accel_copy_crc32c 00:05:03.711 ************************************ 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 
00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:03.711 [2024-07-24 22:14:29.056921] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:03.711 [2024-07-24 22:14:29.057002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754660 ] 00:05:03.711 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.711 [2024-07-24 22:14:29.116383] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.711 [2024-07-24 22:14:29.237106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.711 22:14:29 
accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.711 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read 
-r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:03.712 22:14:29 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 
-- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.088 00:05:05.088 real 0m1.416s 00:05:05.088 user 0m1.288s 00:05:05.088 sys 0m0.129s 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.088 22:14:30 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 ************************************ 00:05:05.088 END TEST accel_copy_crc32c 00:05:05.088 ************************************ 00:05:05.088 22:14:30 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:05.088 22:14:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:05.088 22:14:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.088 22:14:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.088 ************************************ 00:05:05.088 START TEST accel_copy_crc32c_C2 00:05:05.088 ************************************ 00:05:05.088 22:14:30 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.088 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:05.089 [2024-07-24 22:14:30.528421] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:05.089 [2024-07-24 22:14:30.528500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754785 ] 00:05:05.089 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.089 [2024-07-24 22:14:30.589039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.089 [2024-07-24 22:14:30.706820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 
22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:05.089 22:14:30 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 
-- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.463 00:05:06.463 real 0m1.409s 00:05:06.463 user 0m1.286s 00:05:06.463 sys 0m0.124s 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.463 22:14:31 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:06.463 ************************************ 00:05:06.463 END TEST accel_copy_crc32c_C2 00:05:06.463 ************************************ 00:05:06.463 22:14:31 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:06.463 22:14:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 
00:05:06.463 22:14:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.463 22:14:31 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.463 ************************************ 00:05:06.463 START TEST accel_dualcast 00:05:06.463 ************************************ 00:05:06.463 22:14:31 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:05:06.463 22:14:31 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:05:06.464 22:14:31 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:05:06.464 [2024-07-24 22:14:31.990770] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:06.464 [2024-07-24 22:14:31.990859] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3754998 ] 00:05:06.464 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.464 [2024-07-24 22:14:32.050307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.722 [2024-07-24 22:14:32.170882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case 
"$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 
accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.722 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:06.723 22:14:32 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.097 22:14:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:08.097 
22:14:33 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.097 00:05:08.097 real 0m1.421s 00:05:08.097 user 0m1.296s 00:05:08.097 sys 0m0.129s 00:05:08.097 22:14:33 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:08.097 22:14:33 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 ************************************ 00:05:08.097 END TEST accel_dualcast 00:05:08.097 ************************************ 00:05:08.097 22:14:33 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:08.097 22:14:33 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:08.097 22:14:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.097 22:14:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.097 ************************************ 00:05:08.097 START TEST accel_compare 00:05:08.097 ************************************ 00:05:08.097 22:14:33 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.097 22:14:33 accel.accel_compare 
-- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.097 22:14:33 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:08.098 [2024-07-24 22:14:33.464757] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:08.098 [2024-07-24 22:14:33.464832] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755124 ] 00:05:08.098 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.098 [2024-07-24 22:14:33.525267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.098 [2024-07-24 22:14:33.645624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 
00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:08.098 22:14:33 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:34 accel.accel_compare 
-- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:09.474 22:14:34 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.474 00:05:09.474 real 0m1.421s 00:05:09.474 user 0m1.293s 00:05:09.474 sys 0m0.133s 00:05:09.474 22:14:34 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.474 22:14:34 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 END TEST accel_compare 00:05:09.474 ************************************ 00:05:09.474 22:14:34 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:09.474 22:14:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:09.474 22:14:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.474 22:14:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.474 ************************************ 00:05:09.474 START TEST accel_xor 00:05:09.474 ************************************ 00:05:09.474 22:14:34 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:09.474 22:14:34 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:09.474 [2024-07-24 22:14:34.940253] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:09.474 [2024-07-24 22:14:34.940326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755245 ] 00:05:09.474 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.474 [2024-07-24 22:14:35.001170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.474 [2024-07-24 22:14:35.120667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- 
# read -r var val 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.474 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.475 22:14:35 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.475 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.733 22:14:35 accel.accel_xor -- 
accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:09.733 22:14:35 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 
accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:10.668 22:14:36 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.668 00:05:10.668 real 0m1.415s 00:05:10.668 user 0m1.293s 00:05:10.668 sys 0m0.123s 00:05:10.668 22:14:36 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.668 22:14:36 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:10.668 ************************************ 00:05:10.668 END TEST accel_xor 00:05:10.668 ************************************ 00:05:10.668 22:14:36 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:10.668 22:14:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:10.668 22:14:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.668 22:14:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.927 ************************************ 00:05:10.927 START TEST accel_xor 00:05:10.927 ************************************ 00:05:10.927 22:14:36 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:10.927 
22:14:36 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:10.927 22:14:36 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:10.927 [2024-07-24 22:14:36.404769] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:10.927 [2024-07-24 22:14:36.404844] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755440 ] 00:05:10.927 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.927 [2024-07-24 22:14:36.464466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.927 [2024-07-24 22:14:36.583628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 
22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.185 22:14:36 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 
accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:11.186 22:14:36 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # 
case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:12.129 22:14:37 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:12.129 00:05:12.129 real 0m1.415s 00:05:12.129 user 0m1.282s 00:05:12.129 sys 
0m0.133s 00:05:12.129 22:14:37 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.129 22:14:37 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:12.129 ************************************ 00:05:12.129 END TEST accel_xor 00:05:12.129 ************************************ 00:05:12.129 22:14:37 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:12.129 22:14:37 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:12.129 22:14:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.129 22:14:37 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 ************************************ 00:05:12.388 START TEST accel_dif_verify 00:05:12.388 ************************************ 00:05:12.388 22:14:37 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.388 22:14:37 
accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:12.388 22:14:37 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:12.388 [2024-07-24 22:14:37.873210] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:12.388 [2024-07-24 22:14:37.873288] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755583 ] 00:05:12.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.388 [2024-07-24 22:14:37.932951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.388 [2024-07-24 22:14:38.052988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.646 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.647 22:14:38 
accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 
00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 
22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:12.647 22:14:38 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 
accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:13.581 22:14:39 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.581 00:05:13.581 real 0m1.416s 00:05:13.581 user 0m1.289s 00:05:13.581 sys 0m0.128s 00:05:13.581 22:14:39 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.581 22:14:39 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:13.581 ************************************ 00:05:13.581 
END TEST accel_dif_verify 00:05:13.581 ************************************ 00:05:13.840 22:14:39 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:13.840 22:14:39 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:13.840 22:14:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.840 22:14:39 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.840 ************************************ 00:05:13.840 START TEST accel_dif_generate 00:05:13.840 ************************************ 00:05:13.840 22:14:39 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.840 22:14:39 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:13.840 22:14:39 accel.accel_dif_generate -- 
accel/accel.sh@41 -- # jq -r . 00:05:13.840 [2024-07-24 22:14:39.345623] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:13.840 [2024-07-24 22:14:39.345696] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755709 ] 00:05:13.840 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.840 [2024-07-24 22:14:39.405779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.840 [2024-07-24 22:14:39.525787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:14.099 22:14:39 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:14.099 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # 
read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:14.100 22:14:39 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.034 22:14:40 accel.accel_dif_generate -- 
accel/accel.sh@19 -- # read -r var val 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.034 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:15.293 22:14:40 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:15.293 00:05:15.293 real 0m1.413s 00:05:15.293 user 0m1.284s 00:05:15.293 sys 0m0.130s 00:05:15.293 22:14:40 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.293 22:14:40 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:15.293 
************************************ 00:05:15.293 END TEST accel_dif_generate 00:05:15.293 ************************************ 00:05:15.293 22:14:40 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:15.293 22:14:40 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:15.293 22:14:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.293 22:14:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:15.293 ************************************ 00:05:15.293 START TEST accel_dif_generate_copy 00:05:15.293 ************************************ 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' 
]] 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:15.293 22:14:40 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:15.293 [2024-07-24 22:14:40.807082] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:15.293 [2024-07-24 22:14:40.807154] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755860 ] 00:05:15.293 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.293 [2024-07-24 22:14:40.867913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.293 [2024-07-24 22:14:40.988177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@20 -- # val= 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@21 -- # case "$var" in 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:15.552 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:15.553 22:14:41 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.928 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 
accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.929 00:05:16.929 real 0m1.418s 00:05:16.929 user 0m1.283s 00:05:16.929 sys 0m0.135s 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.929 22:14:42 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:16.929 ************************************ 00:05:16.929 END TEST 
accel_dif_generate_copy 00:05:16.929 ************************************ 00:05:16.929 22:14:42 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:16.929 22:14:42 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.929 22:14:42 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:16.929 22:14:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.929 22:14:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.929 ************************************ 00:05:16.929 START TEST accel_comp 00:05:16.929 ************************************ 00:05:16.929 22:14:42 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.929 
22:14:42 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:16.929 [2024-07-24 22:14:42.273907] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:16.929 [2024-07-24 22:14:42.273978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756037 ] 00:05:16.929 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.929 [2024-07-24 22:14:42.333577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.929 [2024-07-24 22:14:42.453925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 
00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.929 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.930 22:14:42 
accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:16.930 22:14:42 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:18.326 22:14:43 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:18.326 00:05:18.326 real 0m1.417s 00:05:18.326 user 0m1.291s 00:05:18.326 sys 0m0.126s 00:05:18.326 22:14:43 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.326 22:14:43 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:18.326 ************************************ 00:05:18.326 END TEST accel_comp 00:05:18.326 ************************************ 00:05:18.326 22:14:43 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.326 22:14:43 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:18.326 22:14:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.326 22:14:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:18.326 ************************************ 00:05:18.326 START TEST accel_decomp 00:05:18.326 ************************************ 00:05:18.326 22:14:43 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:18.326 22:14:43 
accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:18.326 [2024-07-24 22:14:43.742202] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:18.326 [2024-07-24 22:14:43.742284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756166 ] 00:05:18.326 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.326 [2024-07-24 22:14:43.803757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.326 [2024-07-24 22:14:43.924169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.326 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.327 22:14:43 accel.accel_decomp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:18.327 22:14:43 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:19.702 22:14:45 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.702 00:05:19.702 real 0m1.419s 00:05:19.702 user 0m1.287s 00:05:19.702 sys 0m0.133s 00:05:19.702 22:14:45 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.702 22:14:45 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:19.702 ************************************ 00:05:19.702 END TEST accel_decomp 00:05:19.702 ************************************ 00:05:19.702 22:14:45 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.702 22:14:45 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:19.702 22:14:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.702 22:14:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:19.702 ************************************ 00:05:19.702 START TEST accel_decomp_full 00:05:19.702 ************************************ 00:05:19.702 22:14:45 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 
00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:19.702 22:14:45 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:19.702 [2024-07-24 22:14:45.207054] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:19.702 [2024-07-24 22:14:45.207137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756297 ] 00:05:19.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.702 [2024-07-24 22:14:45.267387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.702 [2024-07-24 22:14:45.388082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.961 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.962 22:14:45 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:19.962 22:14:45 accel.accel_decomp_full -- 
accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:19.962 22:14:45 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 
00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:21.337 22:14:46 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.337 00:05:21.337 real 0m1.430s 00:05:21.337 user 0m1.295s 00:05:21.337 sys 0m0.136s 00:05:21.337 22:14:46 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.337 22:14:46 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:21.337 ************************************ 00:05:21.337 END TEST accel_decomp_full 00:05:21.337 ************************************ 00:05:21.337 22:14:46 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.337 22:14:46 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:21.337 22:14:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.337 22:14:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.337 
************************************ 00:05:21.337 START TEST accel_decomp_mcore 00:05:21.337 ************************************ 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:21.337 [2024-07-24 22:14:46.695456] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:21.337 [2024-07-24 22:14:46.695536] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756501 ] 00:05:21.337 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.337 [2024-07-24 22:14:46.756102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:21.337 [2024-07-24 22:14:46.879162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.337 [2024-07-24 22:14:46.879295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:21.337 [2024-07-24 22:14:46.879299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.337 [2024-07-24 22:14:46.879242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:21.337 22:14:46 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.337 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.338 22:14:46 accel.accel_decomp_mcore 
-- accel/accel.sh@21 -- # case "$var" in 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:21.338 22:14:46 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:22.712 22:14:48 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:22.712 
22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.712 00:05:22.712 real 0m1.437s 00:05:22.712 user 0m4.627s 00:05:22.712 sys 0m0.140s 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.712 22:14:48 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:22.712 ************************************ 00:05:22.712 END TEST accel_decomp_mcore 00:05:22.712 ************************************ 00:05:22.712 22:14:48 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:22.712 22:14:48 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:22.712 22:14:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.712 22:14:48 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.712 ************************************ 00:05:22.712 START TEST accel_decomp_full_mcore 00:05:22.712 ************************************ 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:22.712 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:22.712 [2024-07-24 22:14:48.186364] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:22.712 [2024-07-24 22:14:48.186435] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756625 ] 00:05:22.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.712 [2024-07-24 22:14:48.246980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.712 [2024-07-24 22:14:48.369103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.712 [2024-07-24 22:14:48.369227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.712 [2024-07-24 22:14:48.369231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.712 [2024-07-24 22:14:48.369178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 
-- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:22.971 22:14:48 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.929 
22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.929 00:05:23.929 real 0m1.447s 00:05:23.929 user 0m4.692s 00:05:23.929 sys 0m0.136s 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.929 22:14:49 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:23.929 ************************************ 00:05:23.929 END TEST accel_decomp_full_mcore 00:05:23.929 ************************************ 00:05:24.188 22:14:49 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.188 22:14:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:24.188 22:14:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.188 22:14:49 accel -- common/autotest_common.sh@10 -- 
# set +x 00:05:24.188 ************************************ 00:05:24.188 START TEST accel_decomp_mthread 00:05:24.188 ************************************ 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:24.188 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:24.188 [2024-07-24 22:14:49.680904] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:24.188 [2024-07-24 22:14:49.680989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756840 ] 00:05:24.188 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.188 [2024-07-24 22:14:49.741168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.188 [2024-07-24 22:14:49.861508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 
accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:24.446 22:14:49 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.380 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # 
IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.640 00:05:25.640 real 0m1.427s 00:05:25.640 user 0m1.300s 00:05:25.640 sys 0m0.128s 00:05:25.640 22:14:51 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.640 22:14:51 
accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:25.640 ************************************ 00:05:25.640 END TEST accel_decomp_mthread 00:05:25.640 ************************************ 00:05:25.640 22:14:51 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.640 22:14:51 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:25.640 22:14:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.640 22:14:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.640 ************************************ 00:05:25.640 START TEST accel_decomp_full_mthread 00:05:25.640 ************************************ 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread 
-- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:25.640 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:25.640 [2024-07-24 22:14:51.163882] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:25.640 [2024-07-24 22:14:51.163952] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3756969 ] 00:05:25.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.640 [2024-07-24 22:14:51.223709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.640 [2024-07-24 22:14:51.344201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # 
read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val='111250 bytes' 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=32 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:25.899 22:14:51 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:25.899 22:14:51 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:27.274 00:05:27.274 real 0m1.458s 00:05:27.274 user 0m1.325s 00:05:27.274 sys 0m0.135s 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.274 22:14:52 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:27.274 ************************************ 00:05:27.274 END TEST accel_decomp_full_mthread 00:05:27.274 ************************************ 00:05:27.274 22:14:52 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:27.274 22:14:52 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:27.274 22:14:52 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:27.274 22:14:52 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:27.274 22:14:52 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.274 22:14:52 accel -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:27.274 22:14:52 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.274 22:14:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.274 22:14:52 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.274 22:14:52 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.274 22:14:52 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.274 22:14:52 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:27.274 22:14:52 accel -- accel/accel.sh@41 -- # jq -r . 00:05:27.274 ************************************ 00:05:27.274 START TEST accel_dif_functional_tests 00:05:27.274 ************************************ 00:05:27.274 22:14:52 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:27.274 [2024-07-24 22:14:52.702427] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:27.274 [2024-07-24 22:14:52.702535] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757095 ] 00:05:27.274 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.274 [2024-07-24 22:14:52.762271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:27.274 [2024-07-24 22:14:52.884813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.274 [2024-07-24 22:14:52.884931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.274 [2024-07-24 22:14:52.884964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.274 00:05:27.274 00:05:27.274 CUnit - A unit testing framework for C - Version 2.1-3 00:05:27.274 http://cunit.sourceforge.net/ 00:05:27.274 00:05:27.275 00:05:27.275 Suite: accel_dif 00:05:27.275 Test: verify: DIF generated, GUARD check ...passed 00:05:27.275 Test: verify: DIF 
generated, APPTAG check ...passed 00:05:27.275 Test: verify: DIF generated, REFTAG check ...passed 00:05:27.275 Test: verify: DIF not generated, GUARD check ...[2024-07-24 22:14:52.967708] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:27.275 passed 00:05:27.275 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 22:14:52.967778] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:27.275 passed 00:05:27.275 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 22:14:52.967818] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:27.275 passed 00:05:27.275 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:27.275 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 22:14:52.967891] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:27.275 passed 00:05:27.275 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:27.275 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:27.275 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:27.275 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 22:14:52.968048] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:27.275 passed 00:05:27.275 Test: verify copy: DIF generated, GUARD check ...passed 00:05:27.275 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:27.275 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:27.275 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 22:14:52.968227] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:27.275 passed 00:05:27.275 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 22:14:52.968269] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, 
Expected=14, Actual=5a5a 00:05:27.275 passed 00:05:27.275 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 22:14:52.968306] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:27.275 passed 00:05:27.275 Test: generate copy: DIF generated, GUARD check ...passed 00:05:27.275 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:27.275 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:27.275 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:27.275 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:27.275 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:27.275 Test: generate copy: iovecs-len validate ...[2024-07-24 22:14:52.968585] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:27.275 passed 00:05:27.275 Test: generate copy: buffer alignment validate ...passed 00:05:27.275 00:05:27.275 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.275 suites 1 1 n/a 0 0 00:05:27.275 tests 26 26 26 0 0 00:05:27.275 asserts 115 115 115 0 n/a 00:05:27.275 00:05:27.275 Elapsed time = 0.005 seconds 00:05:27.534 00:05:27.534 real 0m0.512s 00:05:27.534 user 0m0.704s 00:05:27.534 sys 0m0.164s 00:05:27.534 22:14:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.534 22:14:53 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:27.534 ************************************ 00:05:27.534 END TEST accel_dif_functional_tests 00:05:27.534 ************************************ 00:05:27.534 00:05:27.534 real 0m32.155s 00:05:27.534 user 0m35.488s 00:05:27.534 sys 0m4.357s 00:05:27.534 22:14:53 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.534 22:14:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.534 
************************************ 00:05:27.534 END TEST accel 00:05:27.534 ************************************ 00:05:27.534 22:14:53 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:27.534 22:14:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.534 22:14:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.534 22:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:27.794 ************************************ 00:05:27.794 START TEST accel_rpc 00:05:27.794 ************************************ 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:27.794 * Looking for test storage... 00:05:27.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:27.794 22:14:53 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:27.794 22:14:53 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3757251 00:05:27.794 22:14:53 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3757251 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3757251 ']' 00:05:27.794 22:14:53 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.794 22:14:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.794 [2024-07-24 22:14:53.361977] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:27.794 [2024-07-24 22:14:53.362079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757251 ] 00:05:27.794 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.794 [2024-07-24 22:14:53.422072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.055 [2024-07-24 22:14:53.538833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.055 22:14:53 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:28.055 22:14:53 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:28.055 22:14:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:28.055 22:14:53 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:28.055 22:14:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:28.055 22:14:53 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:28.055 22:14:53 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:28.055 22:14:53 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.055 22:14:53 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.055 22:14:53 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.055 ************************************ 00:05:28.055 START TEST accel_assign_opcode 00:05:28.055 ************************************ 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:28.055 22:14:53 
accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.055 [2024-07-24 22:14:53.631514] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.055 [2024-07-24 22:14:53.639532] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.055 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:28.315 22:14:53 
accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:28.315 software 00:05:28.315 00:05:28.315 real 0m0.272s 00:05:28.315 user 0m0.041s 00:05:28.315 sys 0m0.007s 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.315 22:14:53 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:28.315 ************************************ 00:05:28.315 END TEST accel_assign_opcode 00:05:28.315 ************************************ 00:05:28.315 22:14:53 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3757251 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3757251 ']' 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3757251 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3757251 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3757251' 00:05:28.315 killing process with pid 3757251 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@967 -- # kill 3757251 00:05:28.315 22:14:53 accel_rpc -- common/autotest_common.sh@972 -- # wait 3757251 00:05:28.575 00:05:28.575 real 0m1.023s 00:05:28.575 user 0m1.023s 00:05:28.575 sys 0m0.399s 00:05:28.575 22:14:54 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.575 22:14:54 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.575 ************************************ 00:05:28.575 END 
TEST accel_rpc 00:05:28.575 ************************************ 00:05:28.834 22:14:54 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.834 22:14:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.834 22:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.834 22:14:54 -- common/autotest_common.sh@10 -- # set +x 00:05:28.834 ************************************ 00:05:28.834 START TEST app_cmdline 00:05:28.834 ************************************ 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:28.834 * Looking for test storage... 00:05:28.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:28.834 22:14:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:28.834 22:14:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3757423 00:05:28.834 22:14:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3757423 00:05:28.834 22:14:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3757423 ']' 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:28.834 22:14:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:28.834 [2024-07-24 22:14:54.443927] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:28.834 [2024-07-24 22:14:54.444032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3757423 ] 00:05:28.834 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.834 [2024-07-24 22:14:54.506024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.092 [2024-07-24 22:14:54.623104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.351 22:14:54 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.351 22:14:54 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:29.351 22:14:54 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:29.609 { 00:05:29.609 "version": "SPDK v24.09-pre git sha1 643864934", 00:05:29.609 "fields": { 00:05:29.609 "major": 24, 00:05:29.609 "minor": 9, 00:05:29.609 "patch": 0, 00:05:29.609 "suffix": "-pre", 00:05:29.609 "commit": "643864934" 00:05:29.609 } 00:05:29.609 } 00:05:29.609 22:14:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:29.609 22:14:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:29.609 22:14:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:29.609 22:14:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:29.609 22:14:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:29.610 22:14:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 
00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:29.610 22:14:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:29.610 22:14:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:29.610 22:14:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:29.610 22:14:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 
00:05:29.610 22:14:55 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:29.868 request: 00:05:29.868 { 00:05:29.868 "method": "env_dpdk_get_mem_stats", 00:05:29.868 "req_id": 1 00:05:29.868 } 00:05:29.868 Got JSON-RPC error response 00:05:29.868 response: 00:05:29.868 { 00:05:29.868 "code": -32601, 00:05:29.868 "message": "Method not found" 00:05:29.868 } 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.868 22:14:55 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3757423 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3757423 ']' 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3757423 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3757423 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3757423' 00:05:29.868 killing process with pid 3757423 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@967 -- # kill 3757423 00:05:29.868 22:14:55 app_cmdline -- common/autotest_common.sh@972 -- # wait 3757423 00:05:30.437 00:05:30.437 real 0m1.502s 00:05:30.437 user 0m1.961s 00:05:30.437 sys 0m0.461s 00:05:30.437 22:14:55 app_cmdline -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.437 22:14:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 ************************************ 00:05:30.437 END TEST app_cmdline 00:05:30.437 ************************************ 00:05:30.437 22:14:55 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.437 22:14:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.437 22:14:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.437 22:14:55 -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 ************************************ 00:05:30.437 START TEST version 00:05:30.437 ************************************ 00:05:30.437 22:14:55 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:30.437 * Looking for test storage... 00:05:30.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:30.437 22:14:55 version -- app/version.sh@17 -- # get_header_version major 00:05:30.437 22:14:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # cut -f2 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.437 22:14:55 version -- app/version.sh@17 -- # major=24 00:05:30.437 22:14:55 version -- app/version.sh@18 -- # get_header_version minor 00:05:30.437 22:14:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # cut -f2 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.437 22:14:55 version -- app/version.sh@18 -- # minor=9 00:05:30.437 22:14:55 version -- app/version.sh@19 -- # 
get_header_version patch 00:05:30.437 22:14:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # cut -f2 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.437 22:14:55 version -- app/version.sh@19 -- # patch=0 00:05:30.437 22:14:55 version -- app/version.sh@20 -- # get_header_version suffix 00:05:30.437 22:14:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # cut -f2 00:05:30.437 22:14:55 version -- app/version.sh@14 -- # tr -d '"' 00:05:30.437 22:14:55 version -- app/version.sh@20 -- # suffix=-pre 00:05:30.437 22:14:55 version -- app/version.sh@22 -- # version=24.9 00:05:30.437 22:14:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:30.437 22:14:55 version -- app/version.sh@28 -- # version=24.9rc0 00:05:30.437 22:14:55 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:30.437 22:14:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:30.437 22:14:55 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:30.437 22:14:56 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:30.437 00:05:30.437 real 0m0.114s 00:05:30.437 user 0m0.069s 00:05:30.437 sys 0m0.067s 00:05:30.437 22:14:56 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.437 22:14:56 version -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 
************************************ 00:05:30.437 END TEST version 00:05:30.437 ************************************ 00:05:30.437 22:14:56 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@198 -- # uname -s 00:05:30.437 22:14:56 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:30.437 22:14:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:30.437 22:14:56 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:30.437 22:14:56 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:30.437 22:14:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:30.437 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 22:14:56 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:30.437 22:14:56 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:30.437 22:14:56 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.437 22:14:56 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:30.437 22:14:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.437 22:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:30.437 ************************************ 00:05:30.437 START TEST nvmf_tcp 00:05:30.437 ************************************ 00:05:30.437 22:14:56 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:30.437 * Looking for test storage... 
00:05:30.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.437 22:14:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:30.437 22:14:56 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:30.437 22:14:56 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.437 22:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:30.437 22:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.437 22:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 ************************************ 00:05:30.696 START TEST nvmf_target_core 00:05:30.696 ************************************ 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:30.696 * Looking for test storage... 00:05:30.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.696 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.697 22:14:56 
nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:30.697 ************************************ 00:05:30.697 START TEST nvmf_abort 00:05:30.697 ************************************ 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:30.697 * Looking for test storage... 
00:05:30.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:30.697 22:14:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:30.697 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:30.698 22:14:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:32.605 22:14:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:32.605 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:32.605 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:32.605 22:14:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:32.605 Found net devices under 0000:08:00.0: cvl_0_0 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.605 22:14:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:32.605 Found net devices under 0000:08:00.1: cvl_0_1 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:32.605 22:14:57 
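The trace above walks `gather_supported_nvmf_pci_devs`, which buckets PCI vendor:device IDs into NIC families (e810, x722, mlx) before scanning `/sys/bus/pci/devices/$pci/net/` for net devices. A minimal standalone sketch of that classification, with the ID tables copied from the trace (the `pci_bus_cache` lookup itself is omitted for brevity):

```shell
# NIC-family classification mirroring the ID tables nvmf/common.sh builds.
intel=0x8086 mellanox=0x15b3
e810=("$intel:0x1592" "$intel:0x159b")
x722=("$intel:0x37d2")
mlx=("$mellanox:0xa2dc" "$mellanox:0x1021" "$mellanox:0xa2d6" \
     "$mellanox:0x101d" "$mellanox:0x1017" "$mellanox:0x1019" \
     "$mellanox:0x1015" "$mellanox:0x1013")

classify() {  # classify "VENDOR:DEVICE" -> family name on stdout
    local id=$1 dev
    for dev in "${e810[@]}"; do [[ $id == "$dev" ]] && { echo e810; return; }; done
    for dev in "${x722[@]}"; do [[ $id == "$dev" ]] && { echo x722; return; }; done
    for dev in "${mlx[@]}";  do [[ $id == "$dev" ]] && { echo mlx;  return; }; done
    echo unknown
}

classify "0x8086:0x159b"   # e810 (the ice-driven ports found in this run)
```

In the run above both discovered ports (0000:08:00.0/1, 0x8086:0x159b) land in the e810 bucket, so `pci_devs` is reset to the e810 list before the per-device loop.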
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:32.605 22:14:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:32.605 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:32.605 22:14:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:32.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:32.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:05:32.606 00:05:32.606 --- 10.0.0.2 ping statistics --- 00:05:32.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.606 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:32.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:32.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:05:32.606 00:05:32.606 --- 10.0.0.1 ping statistics --- 00:05:32.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.606 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:32.606 22:14:58 
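The `nvmf_tcp_init` steps traced above build a point-to-point topology on a physical NIC pair: one port is moved into a private network namespace for the target, the other stays in the root namespace for the initiator, and a pair of pings verifies reachability. Condensed into a standalone configuration sketch (root required; the `cvl_0_*` names and 10.0.0.x addresses are the ones from this particular run, other hosts will name the ports differently):

```shell
# Sketch of the netns topology nvmf_tcp_init builds (run as root).
set -e
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                           # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
```

This is why `NVMF_APP` is then prefixed with `ip netns exec cvl_0_0_ns_spdk`: the target process must start inside the namespace that owns the 10.0.0.2 port.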
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3758949 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3758949 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3758949 ']' 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.606 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.606 [2024-07-24 22:14:58.099285] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:05:32.606 [2024-07-24 22:14:58.099381] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:32.606 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.606 [2024-07-24 22:14:58.168769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.606 [2024-07-24 22:14:58.290632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:32.606 [2024-07-24 22:14:58.290703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:32.606 [2024-07-24 22:14:58.290719] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.606 [2024-07-24 22:14:58.290732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.606 [2024-07-24 22:14:58.290743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:32.606 [2024-07-24 22:14:58.290837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.606 [2024-07-24 22:14:58.290890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.606 [2024-07-24 22:14:58.290893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 [2024-07-24 22:14:58.428421] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 Malloc0 00:05:32.864 22:14:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 Delay0 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.864 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.865 [2024-07-24 22:14:58.492263] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.865 22:14:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:32.865 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.123 [2024-07-24 22:14:58.648582] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:35.654 Initializing NVMe Controllers 00:05:35.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:35.654 controller IO queue size 128 less than required 00:05:35.654 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:35.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:35.654 Initialization complete. Launching workers. 
00:05:35.654 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28314 00:05:35.654 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28375, failed to submit 62 00:05:35.654 success 28318, unsuccess 57, failed 0 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:35.654 rmmod nvme_tcp 00:05:35.654 rmmod nvme_fabrics 00:05:35.654 rmmod nvme_keyring 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:35.654 22:15:00 
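The abort example's summary lines balance in this run: every submitted abort is accounted for as success/unsuccess/failed, and the I/O totals equal the abort attempts plus those that failed to submit. A quick check of that bookkeeping, with the figures copied from the output above:

```shell
# Figures from the abort run's NS/CTRLR summary lines above.
io_completed=123 io_failed=28314             # NS: I/O completed / failed
aborts_submitted=28375 failed_to_submit=62   # CTRLR: abort submitted / not
success=28318 unsuccess=57 failed=0

# Every submitted abort ended as success, unsuccess, or failed...
(( success + unsuccess + failed == aborts_submitted ))
# ...and total I/O equals abort attempts plus those never submitted.
(( io_completed + io_failed == aborts_submitted + failed_to_submit ))
echo "abort accounting balances"
```

These identities are observed from this run's numbers; they are a useful sanity check when eyeballing abort-test output, not a documented invariant of the tool.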
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3758949 ']' 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3758949 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3758949 ']' 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3758949 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758949 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:05:35.654 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:05:35.655 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758949' 00:05:35.655 killing process with pid 3758949 00:05:35.655 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3758949 00:05:35.655 22:15:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3758949 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:35.655 22:15:01 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.655 22:15:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:37.568 00:05:37.568 real 0m6.873s 00:05:37.568 user 0m10.602s 00:05:37.568 sys 0m2.146s 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 ************************************ 00:05:37.568 END TEST nvmf_abort 00:05:37.568 ************************************ 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:37.568 ************************************ 00:05:37.568 START TEST nvmf_ns_hotplug_stress 00:05:37.568 ************************************ 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:37.568 * Looking for test storage... 
00:05:37.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:37.568 22:15:03 
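Each time `paths/export.sh` is sourced, the trace shows it prepending the same toolchain directories again, so `PATH` accumulates many duplicate entries over a run. Purely as an illustration (not part of the harness), order-preserving de-duplication of such a list is small:

```shell
# Drop repeated PATH entries, keeping the first occurrence of each
# (illustrative only; the harness does not do this).
dedup_path() {
    local out= entry
    local IFS=:
    for entry in $1; do
        case ":$out:" in
            *":$entry:"*) ;;                 # already present, skip
            *) out=${out:+$out:}$entry ;;    # append, adding ':' if needed
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin"   # /opt/go/bin:/usr/bin
```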
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:37.568 22:15:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:39.478 22:15:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:05:39.478 Found 0000:08:00.0 (0x8086 - 0x159b) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:05:39.478 Found 0000:08:00.1 (0x8086 - 0x159b) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.478 22:15:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:05:39.478 Found net devices under 0000:08:00.0: cvl_0_0 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.478 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:05:39.479 Found net devices 
under 0000:08:00.1: cvl_0_1 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:39.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:39.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:05:39.479 00:05:39.479 --- 10.0.0.2 ping statistics --- 00:05:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.479 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:39.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:39.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:05:39.479 00:05:39.479 --- 10.0.0.1 ping statistics --- 00:05:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:39.479 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:39.479 22:15:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3760790 00:05:39.479 22:15:05 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3760790 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3760790 ']' 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.479 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.479 [2024-07-24 22:15:05.063056] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:05:39.479 [2024-07-24 22:15:05.063142] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:39.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.479 [2024-07-24 22:15:05.129586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.738 [2024-07-24 22:15:05.249182] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:39.738 [2024-07-24 22:15:05.249254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:39.738 [2024-07-24 22:15:05.249272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:39.738 [2024-07-24 22:15:05.249286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:39.738 [2024-07-24 22:15:05.249297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:39.738 [2024-07-24 22:15:05.249380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.738 [2024-07-24 22:15:05.249432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.738 [2024-07-24 22:15:05.249436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.738 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:39.738 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:05:39.738 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:39.738 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:39.738 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:39.739 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:39.739 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:39.739 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:05:39.997 [2024-07-24 22:15:05.651392] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:39.997 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:40.565 22:15:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:40.565 [2024-07-24 22:15:06.250581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:40.824 22:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:41.083 22:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:41.341 Malloc0 00:05:41.341 22:15:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:41.599 Delay0 00:05:41.599 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.857 22:15:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:42.116 NULL1 00:05:42.116 22:15:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:42.374 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3761157 00:05:42.374 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:42.374 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:42.374 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:42.632 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.890 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.149 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:43.149 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:43.407 true 00:05:43.407 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:43.407 22:15:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.666 22:15:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.924 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:43.924 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:44.182 true 00:05:44.182 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:44.182 22:15:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.751 22:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.010 22:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:45.010 22:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:45.268 true 00:05:45.268 22:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:45.268 22:15:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:45.527 22:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.785 22:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:45.785 22:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:46.044 true 00:05:46.044 22:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:46.044 22:15:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.981 Read completed with error (sct=0, sc=11) 00:05:46.981 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.239 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:47.239 22:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:47.497 true 00:05:47.497 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:47.497 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.755 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.012 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:48.012 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:48.272 true 00:05:48.531 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:48.531 22:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.803 22:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.141 22:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:49.141 22:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:49.399 true 00:05:49.399 22:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:49.399 22:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.968 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.226 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:50.226 22:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:50.793 true 00:05:50.794 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:50.794 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.053 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.311 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:51.311 22:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:51.569 true 00:05:51.569 22:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:51.569 22:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.828 22:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.086 22:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:52.086 22:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:52.342 true 00:05:52.342 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:52.342 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.277 22:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.535 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.535 22:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:53.535 22:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:53.793 true 00:05:53.793 22:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:53.793 22:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.051 22:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.615 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:54.615 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:54.873 true 00:05:54.873 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:54.873 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.130 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.387 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:55.387 22:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:55.644 true 00:05:55.644 22:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:55.644 22:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.582 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.582 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.840 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 
00:05:56.840 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:57.098 true 00:05:57.098 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:57.098 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.355 22:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.613 22:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:57.613 22:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:57.870 true 00:05:57.870 22:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:57.870 22:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.126 22:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.384 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:58.384 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:58.641 true 00:05:58.641 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:05:58.641 22:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.020 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.020 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:00.020 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:00.278 true 00:06:00.278 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:00.278 22:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.537 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.796 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:00.796 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:01.363 true 00:06:01.363 22:15:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:01.363 22:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.622 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.880 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:01.880 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:02.138 true 00:06:02.138 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:02.138 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.397 22:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.655 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:02.655 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:02.914 true 00:06:02.914 22:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:02.914 22:15:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.854 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.112 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:04.112 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:04.370 true 00:06:04.370 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:04.370 22:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.628 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.195 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:05.195 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:05.195 true 00:06:05.195 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:05.195 22:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.762 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.021 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:06.021 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:06.280 true 00:06:06.280 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:06.280 22:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.538 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.798 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:06.798 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:07.056 true 00:06:07.056 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:07.056 22:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.992 
22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:07.992 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:08.250 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:08.250 22:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:08.509 true 00:06:08.509 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:08.509 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.767 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.333 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:09.333 22:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:09.333 true 00:06:09.333 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:09.333 22:15:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.269 22:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.269 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:10.527 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:10.527 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:10.785 true 00:06:10.785 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:10.785 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.044 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.302 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:11.302 22:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:11.562 true 00:06:11.821 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:11.821 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.080 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.364 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:12.364 22:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:12.684 true 00:06:12.684 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:12.684 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.622 Initializing NVMe Controllers 00:06:13.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:13.622 Controller IO queue size 128, less than required. 00:06:13.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:13.622 Controller IO queue size 128, less than required. 00:06:13.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:13.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:13.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:13.622 Initialization complete. Launching workers. 
00:06:13.622 ======================================================== 00:06:13.622 Latency(us) 00:06:13.622 Device Information : IOPS MiB/s Average min max 00:06:13.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 343.47 0.17 124089.96 3527.34 1015752.75 00:06:13.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5383.93 2.63 23692.61 5373.86 530393.21 00:06:13.622 ======================================================== 00:06:13.622 Total : 5727.40 2.80 29713.35 3527.34 1015752.75 00:06:13.622 00:06:13.622 22:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.622 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:13.622 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:13.882 true 00:06:14.141 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3761157 00:06:14.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3761157) - No such process 00:06:14.141 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3761157 00:06:14.141 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.399 22:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.658 
22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:14.658 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:14.658 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:14.658 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:14.658 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:14.917 null0 00:06:14.917 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:14.917 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:14.917 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:15.175 null1 00:06:15.175 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.175 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.175 22:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:15.434 null2 00:06:15.434 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.434 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.434 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:15.692 null3 00:06:15.692 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:15.692 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:15.692 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:16.260 null4 00:06:16.260 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.260 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.260 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:16.518 null5 00:06:16.518 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.518 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.518 22:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:16.776 null6 00:06:16.776 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:16.776 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:16.776 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:17.035 null7 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3764920 3764921 3764923 3764925 3764927 3764929 3764931 3764933
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.035 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.293 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:17.293 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:17.293 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:17.293 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:17.293 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:17.294 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.294 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:17.294 22:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.552 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.811 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.069 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.327 22:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.585 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.843 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.844 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.102 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:19.360 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.360 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.360 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:19.361 22:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:19.361 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:19.361 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.619 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:19.877 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.877 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:19.878 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.136 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.395 22:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:20.395 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.395 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.395 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:20.395 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:20.395 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.653 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.911 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.169 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.428 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.686 22:15:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.686 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.945 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.203 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.462 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.462 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.462 22:15:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:22.721 rmmod nvme_tcp 00:06:22.721 rmmod nvme_fabrics 00:06:22.721 rmmod nvme_keyring 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3760790 ']' 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3760790 00:06:22.721 22:15:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3760790 ']' 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3760790 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3760790 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3760790' 00:06:22.721 killing process with pid 3760790 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3760790 00:06:22.721 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3760790 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:22.980 22:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:24.885 00:06:24.885 real 0m47.370s 00:06:24.885 user 3m39.994s 00:06:24.885 sys 0m16.292s 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:24.885 ************************************ 00:06:24.885 END TEST nvmf_ns_hotplug_stress 00:06:24.885 ************************************ 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.885 22:15:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:25.143 ************************************ 00:06:25.143 START TEST nvmf_delete_subsystem 00:06:25.143 ************************************ 00:06:25.143 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:25.143 * Looking for test storage... 
00:06:25.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:25.144 22:15:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:27.049 22:15:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:27.049 22:15:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:27.049 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.049 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:27.050 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.050 22:15:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:27.050 Found net devices under 0000:08:00.0: cvl_0_0 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:27.050 Found net devices under 0000:08:00.1: cvl_0_1 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:27.050 
22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:27.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:27.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:06:27.050 00:06:27.050 --- 10.0.0.2 ping statistics --- 00:06:27.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.050 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:27.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:27.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:06:27.050 00:06:27.050 --- 10.0.0.1 ping statistics --- 00:06:27.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:27.050 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3767086 00:06:27.050 22:15:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3767086 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3767086 ']' 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:27.050 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.050 [2024-07-24 22:15:52.510749] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:06:27.050 [2024-07-24 22:15:52.510847] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:27.050 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.050 [2024-07-24 22:15:52.577233] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.050 [2024-07-24 22:15:52.696004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:27.050 [2024-07-24 22:15:52.696074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:27.050 [2024-07-24 22:15:52.696090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:27.050 [2024-07-24 22:15:52.696113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:27.050 [2024-07-24 22:15:52.696124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:27.050 [2024-07-24 22:15:52.697503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.050 [2024-07-24 22:15:52.697553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 [2024-07-24 22:15:52.840598] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 [2024-07-24 22:15:52.856782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 NULL1 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 Delay0 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3767111 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:27.309 22:15:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:27.309 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.309 [2024-07-24 22:15:52.941517] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:29.206 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:29.206 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.206 22:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Write completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.464 Read completed with error 
(sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 Read completed with error (sct=0, sc=8) 00:06:29.464 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 
starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 00:06:29.465 starting I/O failed: -6 00:06:29.465 starting I/O failed: -6 00:06:29.465 [2024-07-24 22:15:55.079922] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20e1130 is same with the state(5) to be set 00:06:29.465 Write completed with error (sct=0, sc=8) 00:06:29.465 Read completed with error (sct=0, sc=8) 00:06:29.465 starting I/O failed: -6 [... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted ...] 00:06:29.466 [2024-07-24 22:15:55.080979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2748000c00 is same with the state(5) to be set 00:06:30.400 [2024-07-24 22:15:56.041732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20be600 is same with the state(5) to be set [... repeated completion-error lines omitted ...] 00:06:30.400 [2024-07-24 22:15:56.078154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df2d0 is same with the state(5) to be set [... repeated completion-error lines omitted ...] 00:06:30.400 [2024-07-24 22:15:56.078363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20dedc0 is same with the state(5) to be set [... repeated completion-error lines omitted ...] 00:06:30.401 [2024-07-24 22:15:56.080425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f274800d660 is same with the state(5) to be set [... repeated completion-error lines omitted ...] 00:06:30.401 [2024-07-24 22:15:56.082720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f274800d000 is same with the state(5) to be set 00:06:30.401 Initializing NVMe Controllers 00:06:30.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:30.401 Controller IO queue size 128, less than required. 00:06:30.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:30.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:30.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:30.401 Initialization complete. Launching workers.
00:06:30.401 ======================================================== 00:06:30.401 Latency(us) 00:06:30.401 Device Information : IOPS MiB/s Average min max 00:06:30.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 163.61 0.08 909304.56 788.41 1014422.51 00:06:30.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.92 0.09 906982.94 960.47 1012546.13 00:06:30.401 ======================================================== 00:06:30.401 Total : 349.53 0.17 908069.66 788.41 1014422.51 00:06:30.401 00:06:30.401 [2024-07-24 22:15:56.083180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20be600 (9): Bad file descriptor 00:06:30.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:30.401 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.401 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:30.401 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3767111 00:06:30.401 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3767111 00:06:30.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3767111) - No such process 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3767111 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:06:30.968 22:15:56 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3767111 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3767111 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:30.968 
22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.968 [2024-07-24 22:15:56.604558] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3767515 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:30.968 22:15:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:30.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.968 [2024-07-24 22:15:56.668907] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on 
TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:31.534 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:31.534 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:31.534 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.100 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.100 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:32.100 22:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.665 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.665 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:32.665 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.230 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.230 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:33.230 22:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.489 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.489 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:33.489 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 
0.5 00:06:34.055 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.055 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:34.055 22:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.313 Initializing NVMe Controllers 00:06:34.313 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:34.313 Controller IO queue size 128, less than required. 00:06:34.313 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:34.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:34.313 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:34.313 Initialization complete. Launching workers. 00:06:34.313 ======================================================== 00:06:34.313 Latency(us) 00:06:34.313 Device Information : IOPS MiB/s Average min max 00:06:34.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004257.14 1000202.43 1043624.42 00:06:34.313 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004550.86 1000220.33 1012596.87 00:06:34.313 ======================================================== 00:06:34.313 Total : 256.00 0.12 1004404.00 1000202.43 1043624.42 00:06:34.313 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3767515 00:06:34.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3767515) - No such process 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@67 -- # wait 3767515 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:34.572 rmmod nvme_tcp 00:06:34.572 rmmod nvme_fabrics 00:06:34.572 rmmod nvme_keyring 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3767086 ']' 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3767086 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3767086 ']' 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3767086 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 
00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3767086 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3767086' 00:06:34.572 killing process with pid 3767086 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3767086 00:06:34.572 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3767086 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:34.831 22:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.373 22:16:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:37.373 00:06:37.373 real 0m11.892s 00:06:37.373 user 0m27.771s 00:06:37.373 sys 0m2.675s 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.373 ************************************ 00:06:37.373 END TEST nvmf_delete_subsystem 00:06:37.373 ************************************ 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.373 ************************************ 00:06:37.373 START TEST nvmf_host_management 00:06:37.373 ************************************ 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:37.373 * Looking for test storage... 
00:06:37.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.373 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # 
nvmftestinit 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:37.374 22:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:38.810 22:16:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:38.810 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.810 
22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.810 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:38.811 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.811 
22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:38.811 Found net devices under 0000:08:00.0: cvl_0_0 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:38.811 Found net devices under 0000:08:00.1: cvl_0_1 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:38.811 
22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:38.811 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:38.811 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:06:38.811 00:06:38.811 --- 10.0.0.2 ping statistics --- 00:06:38.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.811 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:38.811 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:38.811 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:06:38.811 00:06:38.811 --- 10.0.0.1 ping statistics --- 00:06:38.811 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:38.811 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:38.811 22:16:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3769322 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3769322 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3769322 ']' 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.811 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.811 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:38.811 [2024-07-24 22:16:04.403802] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:06:38.811 [2024-07-24 22:16:04.403895] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.811 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.811 [2024-07-24 22:16:04.474654] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.070 [2024-07-24 22:16:04.595974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.070 [2024-07-24 22:16:04.596042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.070 [2024-07-24 22:16:04.596058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.070 [2024-07-24 22:16:04.596071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.070 [2024-07-24 22:16:04.596084] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
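The `waitforlisten 3769322` call traced above (common/autotest_common.sh@833-862) blocks until the freshly launched `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that wait-for-socket pattern — not the actual SPDK helper, and `wait_for_rpc_sock` is a name invented here — looks like:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten pattern: after launching the target in the
# background, poll until its RPC unix-domain socket appears, with a bounded
# retry count so a crashed target fails the test instead of hanging it.
wait_for_rpc_sock() {
  local sock=$1 max_retries=${2:-100}
  local i
  for (( i = 0; i < max_retries; i++ )); do
    # -S: path exists and is a unix domain socket (the RPC listener)
    [[ -S "$sock" ]] && return 0
    sleep 0.1
  done
  echo "timed out waiting for $sock" >&2
  return 1
}
```

The real helper additionally verifies the PID is still alive between polls, which is why the trace shows it holding the target's pid (3769322) as an argument.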
00:06:39.070 [2024-07-24 22:16:04.596172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.070 [2024-07-24 22:16:04.596275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:39.070 [2024-07-24 22:16:04.596278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.070 [2024-07-24 22:16:04.596225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.070 [2024-07-24 22:16:04.745784] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.070 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:39.070 22:16:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.071 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.329 Malloc0 00:06:39.329 [2024-07-24 22:16:04.808354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3769374 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3769374 /var/tmp/bdevperf.sock 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3769374 ']' 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:39.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:39.329 { 00:06:39.329 "params": { 00:06:39.329 "name": "Nvme$subsystem", 00:06:39.329 "trtype": "$TEST_TRANSPORT", 00:06:39.329 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:39.329 "adrfam": "ipv4", 00:06:39.329 "trsvcid": "$NVMF_PORT", 00:06:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:39.329 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:39.329 "hdgst": ${hdgst:-false}, 
00:06:39.329 "ddgst": ${ddgst:-false} 00:06:39.329 }, 00:06:39.329 "method": "bdev_nvme_attach_controller" 00:06:39.329 } 00:06:39.329 EOF 00:06:39.329 )") 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:39.329 22:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:39.329 "params": { 00:06:39.329 "name": "Nvme0", 00:06:39.329 "trtype": "tcp", 00:06:39.329 "traddr": "10.0.0.2", 00:06:39.329 "adrfam": "ipv4", 00:06:39.329 "trsvcid": "4420", 00:06:39.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:39.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:39.329 "hdgst": false, 00:06:39.329 "ddgst": false 00:06:39.329 }, 00:06:39.329 "method": "bdev_nvme_attach_controller" 00:06:39.329 }' 00:06:39.329 [2024-07-24 22:16:04.892765] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:06:39.329 [2024-07-24 22:16:04.892856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769374 ] 00:06:39.329 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.329 [2024-07-24 22:16:04.954090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.588 [2024-07-24 22:16:05.071244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.588 Running I/O for 10 seconds... 
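The heredoc traced above (nvmf/common.sh@554-558) builds the per-controller JSON that bdevperf consumes via `--json /dev/fd/63`: one `bdev_nvme_attach_controller` stanza per subsystem, with the template variables expanded to the Nvme0 / 10.0.0.2:4420 values printed in the trace. A simplified, self-contained sketch of that assembly — `gen_controller_json` is a hypothetical name, not the real `gen_nvmf_target_json` helper, which also wraps the stanzas in a full bdev subsystem config — could be:

```shell
#!/usr/bin/env bash
# Emit one bdev_nvme_attach_controller JSON stanza, mirroring the expanded
# output shown in the trace (Nvme0, tcp, 10.0.0.2, port 4420, digests off).
gen_controller_json() {
  local n=$1 traddr=$2 trsvcid=$3
  cat <<EOF
{
  "params": {
    "name": "Nvme$n",
    "trtype": "tcp",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$trsvcid",
    "subnqn": "nqn.2016-06.io.spdk:cnode$n",
    "hostnqn": "nqn.2016-06.io.spdk:host$n",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}
```

Feeding the result through `jq .` (as nvmf/common.sh@556 does) both validates the JSON and joins multiple stanzas when more than one subsystem index is passed.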
00:06:39.588 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.588 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:06:39.588 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:39.588 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.588 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:39.846 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=472 00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 472 -ge 100 ']'
00:06:40.105 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:40.106 [2024-07-24 22:16:05.643126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb9e500 is same with the state(5) to be set
00:06:40.106 [2024-07-24 22:16:05.646503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:40.106 [2024-07-24 22:16:05.646546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command + "ABORTED - SQ DELETION" completion pairs repeat for READ cid:22-63 (lba:68352-73600) and WRITE cid:0-20 (lba:73728-76288); the xtrace lines interleaved with the dump are collected below ...]
00:06:40.106 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:40.107 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:40.107 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:40.107 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:40.107 [2024-07-24 22:16:05.649300] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10603d0 was disconnected and freed. reset controller.
[... four admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) likewise completed with "ABORTED - SQ DELETION" ...]
00:06:40.107 [2024-07-24 22:16:05.649733] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc2e8d0 is same with the state(5) to be set
00:06:40.108 [2024-07-24 22:16:05.651081]
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:06:40.108 task offset: 68224 on job bdev=Nvme0n1 fails
00:06:40.108
00:06:40.108                                                         Latency(us)
00:06:40.108 Device Information                                    : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:06:40.108 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:40.108 Job: Nvme0n1 ended in about 0.41 seconds with error
00:06:40.108 Verification LBA range: start 0x0 length 0x400
00:06:40.108 Nvme0n1                                               :       0.41 1310.11   81.88  157.31  0.00  42111.22  3713.71 39418.69
00:06:40.108 ===================================================================================================================
00:06:40.108 Total                                                 :            1310.11   81.88  157.31  0.00  42111.22  3713.71 39418.69
00:06:40.108 [2024-07-24 22:16:05.653411] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:40.108 [2024-07-24 22:16:05.653446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e8d0 (9): Bad file descriptor
00:06:40.108 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:40.108 22:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-24 22:16:05.663769] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
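The waitforio xtrace above (host_management.sh@45-64) is the standard poll-until-I/O pattern: query bdev_get_iostat over the bdevperf RPC socket, extract num_read_ops with jq, and retry up to ten times with a 250 ms sleep. A minimal standalone sketch of that loop follows; rpc_cmd is stubbed with canned iostat JSON here (in the real run it talks to /var/tmp/bdevperf.sock), so the threshold and bdev name are illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop from host_management.sh.
# rpc_cmd is a stub returning canned bdev_get_iostat output; the real
# helper sends the RPC to bdevperf over /var/tmp/bdevperf.sock.
rpc_cmd() {
    echo '{"bdevs":[{"name":"Nvme0n1","num_read_ops":472}]}'
}

waitforio() {
    local ret=1 i read_io_count
    # Retry up to 10 times, as in host_management.sh@54.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
        # Succeed once at least 100 reads have completed (sh@58).
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio
```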
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3769374
00:06:41.042 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3769374) - No such process
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:06:41.042 {
00:06:41.042   "params": {
00:06:41.042     "name": "Nvme$subsystem",
00:06:41.042     "trtype": "$TEST_TRANSPORT",
00:06:41.042     "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:41.042     "adrfam": "ipv4",
00:06:41.042     "trsvcid": "$NVMF_PORT",
00:06:41.042     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:41.042     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:41.042     "hdgst": ${hdgst:-false},
00:06:41.042     "ddgst": ${ddgst:-false}
00:06:41.042   },
00:06:41.042   "method": "bdev_nvme_attach_controller"
00:06:41.042 }
00:06:41.042 EOF
00:06:41.042 )")
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:06:41.042 22:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:06:41.042   "params": {
00:06:41.042     "name": "Nvme0",
00:06:41.042     "trtype": "tcp",
00:06:41.042     "traddr": "10.0.0.2",
00:06:41.042     "adrfam": "ipv4",
00:06:41.042     "trsvcid": "4420",
00:06:41.042     "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:41.042     "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:41.042     "hdgst": false,
00:06:41.042     "ddgst": false
00:06:41.042   },
00:06:41.042   "method": "bdev_nvme_attach_controller"
00:06:41.042 }'
00:06:41.042 [2024-07-24 22:16:06.708983] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
00:06:41.042 [2024-07-24 22:16:06.709080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3769582 ]
00:06:41.042 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.300 [2024-07-24 22:16:06.772796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.300 [2024-07-24 22:16:06.890049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.558 Running I/O for 1 seconds...
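The gen_nvmf_target_json xtrace above expands a per-subsystem heredoc template into the bdev_nvme_attach_controller config that bdevperf reads from /dev/fd/62, joining the fragments with IFS=,. A condensed sketch of that expansion follows; the real helper also pipes the result through jq for validation, and the variable values here are placeholders matching this run:

```shell
#!/usr/bin/env bash
# Condensed sketch of gen_nvmf_target_json from nvmf/common.sh:
# expand one JSON fragment per subsystem, then join them with commas.
# Values below mirror this test run; they are placeholders, not defaults.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(
cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Comma-join the fragments, as nvmf/common.sh does with IFS=,.
    local IFS=,
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 0
```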
00:06:42.493
00:06:42.493                                                         Latency(us)
00:06:42.493 Device Information                                    : runtime(s)    IOPS   MiB/s  Fail/s  TO/s   Average      min      max
00:06:42.493 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:42.493 Verification LBA range: start 0x0 length 0x400
00:06:42.493 Nvme0n1                                               :       1.03 1362.92   85.18    0.00  0.00  46087.64  7815.77 39612.87
00:06:42.493 ===================================================================================================================
00:06:42.493 Total                                                 :            1362.92   85.18    0.00  0.00  46087.64  7815.77 39612.87
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:42.752 rmmod nvme_tcp
00:06:42.752 rmmod nvme_fabrics
00:06:42.752 rmmod nvme_keyring
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3769322 ']'
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3769322
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3769322 ']'
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3769322
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3769322
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3769322'
00:06:42.752 killing process with pid 3769322
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3769322
00:06:42.752 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3769322
00:06:43.012 [2024-07-24 22:16:08.634080] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:43.012 22:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:45.557
00:06:45.557 real    0m8.158s
00:06:45.557 user    0m19.030s
00:06:45.557 sys     0m2.297s
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:45.557 ************************************
00:06:45.557 END TEST nvmf_host_management
00:06:45.557 ************************************
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh
--transport=tcp 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.557 ************************************ 00:06:45.557 START TEST nvmf_lvol 00:06:45.557 ************************************ 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:45.557 * Looking for test storage... 00:06:45.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.557 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.558 22:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:06:46.936 Found 0000:08:00.0 (0x8086 - 0x159b) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:06:46.936 Found 0000:08:00.1 (0x8086 - 0x159b) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.936 22:16:12 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:06:46.936 Found net devices under 0000:08:00.0: cvl_0_0 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:06:46.936 Found net devices under 0000:08:00.1: cvl_0_1 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:46.936 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:46.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:06:46.937 00:06:46.937 --- 10.0.0.2 ping statistics --- 00:06:46.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.937 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:06:46.937 00:06:46.937 --- 10.0.0.1 ping statistics --- 00:06:46.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.937 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3771201 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3771201 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3771201 ']' 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.937 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:47.195 [2024-07-24 22:16:12.655092] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:06:47.195 [2024-07-24 22:16:12.655182] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:47.195 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.195 [2024-07-24 22:16:12.720036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.195 [2024-07-24 22:16:12.836101] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:47.195 [2024-07-24 22:16:12.836164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:47.195 [2024-07-24 22:16:12.836179] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:47.195 [2024-07-24 22:16:12.836193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:47.195 [2024-07-24 22:16:12.836204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:47.195 [2024-07-24 22:16:12.836304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.195 [2024-07-24 22:16:12.836365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.195 [2024-07-24 22:16:12.836368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:47.452 22:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:47.710 [2024-07-24 22:16:13.241962] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:47.710 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:47.968 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:47.968 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:48.226 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:48.226 22:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:48.792 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:49.049 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e4d1a0e5-c811-4626-8764-7d9be63663cc 00:06:49.049 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e4d1a0e5-c811-4626-8764-7d9be63663cc lvol 20 00:06:49.306 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ed4ffd42-d1e2-4421-987d-c87115c9053f 00:06:49.306 22:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:49.564 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ed4ffd42-d1e2-4421-987d-c87115c9053f 00:06:49.821 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:50.079 [2024-07-24 22:16:15.686871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:50.079 22:16:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:50.336 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3771533 00:06:50.336 22:16:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:50.336 22:16:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:50.336 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.708 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ed4ffd42-d1e2-4421-987d-c87115c9053f MY_SNAPSHOT 00:06:51.708 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=257df5fa-ead2-4c36-b98a-e099e472486f 00:06:51.708 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ed4ffd42-d1e2-4421-987d-c87115c9053f 30 00:06:52.273 22:16:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 257df5fa-ead2-4c36-b98a-e099e472486f MY_CLONE 00:06:52.531 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8e246f34-8075-488f-a7d5-5137c4daab75 00:06:52.531 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8e246f34-8075-488f-a7d5-5137c4daab75 00:06:53.096 22:16:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3771533 00:07:01.201 Initializing NVMe Controllers 00:07:01.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:01.201 Controller IO queue size 128, less than required. 00:07:01.201 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:01.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:01.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:01.201 Initialization complete. Launching workers. 00:07:01.201 ======================================================== 00:07:01.201 Latency(us) 00:07:01.201 Device Information : IOPS MiB/s Average min max 00:07:01.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9594.40 37.48 13351.02 2251.47 89477.79 00:07:01.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9680.50 37.81 13228.97 2267.67 80147.95 00:07:01.201 ======================================================== 00:07:01.201 Total : 19274.90 75.29 13289.72 2251.47 89477.79 00:07:01.201 00:07:01.201 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:01.201 22:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ed4ffd42-d1e2-4421-987d-c87115c9053f 00:07:01.459 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e4d1a0e5-c811-4626-8764-7d9be63663cc 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:01.717 rmmod nvme_tcp 00:07:01.717 rmmod nvme_fabrics 00:07:01.717 rmmod nvme_keyring 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3771201 ']' 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3771201 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3771201 ']' 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3771201 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3771201 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3771201' 00:07:01.717 killing process with pid 3771201 00:07:01.717 22:16:27 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3771201 00:07:01.717 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3771201 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:01.976 22:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:04.514 00:07:04.514 real 0m18.941s 00:07:04.514 user 1m6.627s 00:07:04.514 sys 0m5.075s 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 ************************************ 00:07:04.514 END TEST nvmf_lvol 00:07:04.514 ************************************ 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:04.514 22:16:29 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:04.514 ************************************ 00:07:04.514 START TEST nvmf_lvs_grow 00:07:04.514 ************************************ 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:04.514 * Looking for test storage... 00:07:04.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:04.514 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:04.515 22:16:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:05.892 22:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:05.892 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.892 
22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:05.892 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.892 22:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:05.892 Found net devices under 0000:08:00.0: cvl_0_0 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:05.892 Found net devices under 0000:08:00.1: cvl_0_1 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:05.892 22:16:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:05.892 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:05.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:05.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:07:05.893 00:07:05.893 --- 10.0.0.2 ping statistics --- 00:07:05.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.893 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:05.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:05.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:07:05.893 00:07:05.893 --- 10.0.0.1 ping statistics --- 00:07:05.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:05.893 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3774099 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3774099 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3774099 ']' 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.893 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.151 [2024-07-24 22:16:31.635916] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:06.151 [2024-07-24 22:16:31.636017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:06.151 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.151 [2024-07-24 22:16:31.701989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.151 [2024-07-24 22:16:31.820367] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:06.151 [2024-07-24 22:16:31.820433] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:06.151 [2024-07-24 22:16:31.820449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:06.151 [2024-07-24 22:16:31.820461] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:06.151 [2024-07-24 22:16:31.820473] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:06.151 [2024-07-24 22:16:31.820511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:06.409 22:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:06.667 [2024-07-24 22:16:32.229400] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.668 ************************************ 00:07:06.668 START TEST lvs_grow_clean 00:07:06.668 ************************************ 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:06.668 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:06.926 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:06.926 22:16:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:07.494 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=eda72353-016b-476e-8d08-08856c62ff0c 00:07:07.494 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:07.494 22:16:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:07.494 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:07.494 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:07.494 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eda72353-016b-476e-8d08-08856c62ff0c lvol 150 00:07:08.059 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0d8da5d8-0256-4ac5-8886-4cb5f688454b 00:07:08.059 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:08.059 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:08.316 [2024-07-24 22:16:33.773272] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:08.316 [2024-07-24 22:16:33.773354] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:08.316 true 00:07:08.316 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:08.316 22:16:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:08.572 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:08.572 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.831 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0d8da5d8-0256-4ac5-8886-4cb5f688454b 00:07:09.121 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:09.404 [2024-07-24 22:16:34.964933] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:09.404 22:16:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.662 22:16:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3774482 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3774482 /var/tmp/bdevperf.sock 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3774482 ']' 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:09.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.662 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:09.662 [2024-07-24 22:16:35.330229] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:07:09.662 [2024-07-24 22:16:35.330331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3774482 ] 00:07:09.662 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.920 [2024-07-24 22:16:35.391206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.920 [2024-07-24 22:16:35.507840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.920 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.920 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:09.920 22:16:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:10.485 Nvme0n1 00:07:10.485 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:10.743 [ 00:07:10.743 { 00:07:10.743 "name": "Nvme0n1", 00:07:10.743 "aliases": [ 00:07:10.743 "0d8da5d8-0256-4ac5-8886-4cb5f688454b" 00:07:10.743 ], 00:07:10.743 "product_name": "NVMe disk", 00:07:10.743 "block_size": 4096, 00:07:10.743 "num_blocks": 38912, 00:07:10.743 "uuid": "0d8da5d8-0256-4ac5-8886-4cb5f688454b", 00:07:10.743 "assigned_rate_limits": { 00:07:10.743 "rw_ios_per_sec": 0, 00:07:10.743 "rw_mbytes_per_sec": 0, 00:07:10.743 "r_mbytes_per_sec": 0, 00:07:10.743 "w_mbytes_per_sec": 0 00:07:10.743 }, 00:07:10.743 "claimed": false, 00:07:10.743 "zoned": false, 00:07:10.743 
"supported_io_types": { 00:07:10.743 "read": true, 00:07:10.743 "write": true, 00:07:10.743 "unmap": true, 00:07:10.743 "flush": true, 00:07:10.743 "reset": true, 00:07:10.743 "nvme_admin": true, 00:07:10.743 "nvme_io": true, 00:07:10.743 "nvme_io_md": false, 00:07:10.743 "write_zeroes": true, 00:07:10.743 "zcopy": false, 00:07:10.743 "get_zone_info": false, 00:07:10.743 "zone_management": false, 00:07:10.743 "zone_append": false, 00:07:10.743 "compare": true, 00:07:10.743 "compare_and_write": true, 00:07:10.743 "abort": true, 00:07:10.743 "seek_hole": false, 00:07:10.743 "seek_data": false, 00:07:10.743 "copy": true, 00:07:10.743 "nvme_iov_md": false 00:07:10.743 }, 00:07:10.743 "memory_domains": [ 00:07:10.743 { 00:07:10.743 "dma_device_id": "system", 00:07:10.743 "dma_device_type": 1 00:07:10.743 } 00:07:10.743 ], 00:07:10.743 "driver_specific": { 00:07:10.743 "nvme": [ 00:07:10.743 { 00:07:10.743 "trid": { 00:07:10.743 "trtype": "TCP", 00:07:10.743 "adrfam": "IPv4", 00:07:10.743 "traddr": "10.0.0.2", 00:07:10.744 "trsvcid": "4420", 00:07:10.744 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:10.744 }, 00:07:10.744 "ctrlr_data": { 00:07:10.744 "cntlid": 1, 00:07:10.744 "vendor_id": "0x8086", 00:07:10.744 "model_number": "SPDK bdev Controller", 00:07:10.744 "serial_number": "SPDK0", 00:07:10.744 "firmware_revision": "24.09", 00:07:10.744 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:10.744 "oacs": { 00:07:10.744 "security": 0, 00:07:10.744 "format": 0, 00:07:10.744 "firmware": 0, 00:07:10.744 "ns_manage": 0 00:07:10.744 }, 00:07:10.744 "multi_ctrlr": true, 00:07:10.744 "ana_reporting": false 00:07:10.744 }, 00:07:10.744 "vs": { 00:07:10.744 "nvme_version": "1.3" 00:07:10.744 }, 00:07:10.744 "ns_data": { 00:07:10.744 "id": 1, 00:07:10.744 "can_share": true 00:07:10.744 } 00:07:10.744 } 00:07:10.744 ], 00:07:10.744 "mp_policy": "active_passive" 00:07:10.744 } 00:07:10.744 } 00:07:10.744 ] 00:07:10.744 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3774586 00:07:10.744 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:10.744 22:16:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:11.002 Running I/O for 10 seconds... 00:07:11.935 Latency(us) 00:07:11.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.935 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.935 Nvme0n1 : 1.00 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:07:11.935 =================================================================================================================== 00:07:11.935 Total : 13590.00 53.09 0.00 0.00 0.00 0.00 0.00 00:07:11.935 00:07:12.868 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:12.868 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:12.868 Nvme0n1 : 2.00 13781.50 53.83 0.00 0.00 0.00 0.00 0.00 00:07:12.868 =================================================================================================================== 00:07:12.868 Total : 13781.50 53.83 0.00 0.00 0.00 0.00 0.00 00:07:12.868 00:07:13.126 true 00:07:13.126 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:13.126 22:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:13.384 22:16:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:13.384 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:13.384 22:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3774586 00:07:13.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.950 Nvme0n1 : 3.00 13866.00 54.16 0.00 0.00 0.00 0.00 0.00 00:07:13.950 =================================================================================================================== 00:07:13.950 Total : 13866.00 54.16 0.00 0.00 0.00 0.00 0.00 00:07:13.950 00:07:14.884 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.884 Nvme0n1 : 4.00 13923.75 54.39 0.00 0.00 0.00 0.00 0.00 00:07:14.884 =================================================================================================================== 00:07:14.884 Total : 13923.75 54.39 0.00 0.00 0.00 0.00 0.00 00:07:14.884 00:07:15.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.818 Nvme0n1 : 5.00 13958.40 54.52 0.00 0.00 0.00 0.00 0.00 00:07:15.818 =================================================================================================================== 00:07:15.818 Total : 13958.40 54.52 0.00 0.00 0.00 0.00 0.00 00:07:15.818 00:07:17.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.192 Nvme0n1 : 6.00 14003.17 54.70 0.00 0.00 0.00 0.00 0.00 00:07:17.192 =================================================================================================================== 00:07:17.192 Total : 14003.17 54.70 0.00 0.00 0.00 0.00 0.00 00:07:17.192 00:07:18.127 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.127 Nvme0n1 : 7.00 14036.71 54.83 0.00 0.00 0.00 0.00 0.00 00:07:18.127 
=================================================================================================================== 00:07:18.127 Total : 14036.71 54.83 0.00 0.00 0.00 0.00 0.00 00:07:18.127 00:07:19.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.061 Nvme0n1 : 8.00 14068.00 54.95 0.00 0.00 0.00 0.00 0.00 00:07:19.061 =================================================================================================================== 00:07:19.061 Total : 14068.00 54.95 0.00 0.00 0.00 0.00 0.00 00:07:19.061 00:07:19.994 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.994 Nvme0n1 : 9.00 14085.33 55.02 0.00 0.00 0.00 0.00 0.00 00:07:19.994 =================================================================================================================== 00:07:19.994 Total : 14085.33 55.02 0.00 0.00 0.00 0.00 0.00 00:07:19.994 00:07:20.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.925 Nvme0n1 : 10.00 14111.90 55.12 0.00 0.00 0.00 0.00 0.00 00:07:20.925 =================================================================================================================== 00:07:20.925 Total : 14111.90 55.12 0.00 0.00 0.00 0.00 0.00 00:07:20.925 00:07:20.925 00:07:20.925 Latency(us) 00:07:20.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.925 Nvme0n1 : 10.01 14112.70 55.13 0.00 0.00 9064.44 5582.70 19612.25 00:07:20.925 =================================================================================================================== 00:07:20.925 Total : 14112.70 55.13 0.00 0.00 9064.44 5582.70 19612.25 00:07:20.925 0 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3774482 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@948 -- # '[' -z 3774482 ']' 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3774482 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3774482 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3774482' 00:07:20.925 killing process with pid 3774482 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3774482 00:07:20.925 Received shutdown signal, test time was about 10.000000 seconds 00:07:20.925 00:07:20.925 Latency(us) 00:07:20.925 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.925 =================================================================================================================== 00:07:20.925 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:20.925 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3774482 00:07:21.184 22:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:21.441 22:16:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.699 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:21.699 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:22.265 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:22.265 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:22.265 22:16:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:22.265 [2024-07-24 22:16:47.969572] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:22.524 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:22.782 request: 00:07:22.782 { 00:07:22.782 "uuid": "eda72353-016b-476e-8d08-08856c62ff0c", 00:07:22.782 "method": "bdev_lvol_get_lvstores", 00:07:22.782 "req_id": 1 00:07:22.782 } 00:07:22.782 Got JSON-RPC error response 00:07:22.782 response: 00:07:22.782 { 00:07:22.782 "code": -19, 00:07:22.782 "message": "No such device" 00:07:22.782 } 00:07:22.782 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:07:22.782 22:16:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.782 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:22.782 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.782 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:23.040 aio_bdev 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0d8da5d8-0256-4ac5-8886-4cb5f688454b 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=0d8da5d8-0256-4ac5-8886-4cb5f688454b 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:23.040 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:23.298 22:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0d8da5d8-0256-4ac5-8886-4cb5f688454b -t 2000 00:07:23.557 [ 00:07:23.557 { 
00:07:23.557 "name": "0d8da5d8-0256-4ac5-8886-4cb5f688454b", 00:07:23.557 "aliases": [ 00:07:23.557 "lvs/lvol" 00:07:23.557 ], 00:07:23.557 "product_name": "Logical Volume", 00:07:23.557 "block_size": 4096, 00:07:23.557 "num_blocks": 38912, 00:07:23.557 "uuid": "0d8da5d8-0256-4ac5-8886-4cb5f688454b", 00:07:23.557 "assigned_rate_limits": { 00:07:23.557 "rw_ios_per_sec": 0, 00:07:23.557 "rw_mbytes_per_sec": 0, 00:07:23.557 "r_mbytes_per_sec": 0, 00:07:23.557 "w_mbytes_per_sec": 0 00:07:23.557 }, 00:07:23.557 "claimed": false, 00:07:23.557 "zoned": false, 00:07:23.557 "supported_io_types": { 00:07:23.557 "read": true, 00:07:23.557 "write": true, 00:07:23.557 "unmap": true, 00:07:23.557 "flush": false, 00:07:23.557 "reset": true, 00:07:23.557 "nvme_admin": false, 00:07:23.557 "nvme_io": false, 00:07:23.557 "nvme_io_md": false, 00:07:23.557 "write_zeroes": true, 00:07:23.557 "zcopy": false, 00:07:23.557 "get_zone_info": false, 00:07:23.557 "zone_management": false, 00:07:23.557 "zone_append": false, 00:07:23.557 "compare": false, 00:07:23.557 "compare_and_write": false, 00:07:23.557 "abort": false, 00:07:23.557 "seek_hole": true, 00:07:23.557 "seek_data": true, 00:07:23.557 "copy": false, 00:07:23.557 "nvme_iov_md": false 00:07:23.557 }, 00:07:23.557 "driver_specific": { 00:07:23.557 "lvol": { 00:07:23.557 "lvol_store_uuid": "eda72353-016b-476e-8d08-08856c62ff0c", 00:07:23.557 "base_bdev": "aio_bdev", 00:07:23.557 "thin_provision": false, 00:07:23.557 "num_allocated_clusters": 38, 00:07:23.557 "snapshot": false, 00:07:23.557 "clone": false, 00:07:23.557 "esnap_clone": false 00:07:23.557 } 00:07:23.557 } 00:07:23.557 } 00:07:23.557 ] 00:07:23.557 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:07:23.557 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
eda72353-016b-476e-8d08-08856c62ff0c 00:07:23.557 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:23.815 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:23.815 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:23.815 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:24.381 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:24.381 22:16:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0d8da5d8-0256-4ac5-8886-4cb5f688454b 00:07:24.647 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eda72353-016b-476e-8d08-08856c62ff0c 00:07:24.909 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.167 00:07:25.167 real 0m18.465s 00:07:25.167 user 0m18.015s 00:07:25.167 sys 0m1.957s 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.167 22:16:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:25.167 ************************************ 00:07:25.167 END TEST lvs_grow_clean 00:07:25.167 ************************************ 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:25.167 ************************************ 00:07:25.167 START TEST lvs_grow_dirty 00:07:25.167 ************************************ 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:25.167 22:16:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:25.425 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:25.425 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:25.991 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:25.991 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:25.991 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:26.249 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:26.249 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:26.249 22:16:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
52e110cb-9c21-4011-bd3f-e07054807e1f lvol 150 00:07:26.507 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c23358af-342b-472e-a815-ce0d50076a38 00:07:26.507 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:26.507 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:26.765 [2024-07-24 22:16:52.296202] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:26.765 [2024-07-24 22:16:52.296280] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:26.765 true 00:07:26.765 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:26.765 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:27.022 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:27.023 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:27.280 22:16:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
c23358af-342b-472e-a815-ce0d50076a38 00:07:27.538 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:27.796 [2024-07-24 22:16:53.491859] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.053 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3776249 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3776249 /var/tmp/bdevperf.sock 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3776249 ']' 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:28.311 22:16:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:28.311 [2024-07-24 22:16:53.861543] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:28.311 [2024-07-24 22:16:53.861643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3776249 ] 00:07:28.311 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.311 [2024-07-24 22:16:53.922387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.570 [2024-07-24 22:16:54.039924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.570 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.570 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:28.570 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:28.827 Nvme0n1 00:07:29.083 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:29.083 [ 00:07:29.083 { 00:07:29.083 "name": "Nvme0n1", 00:07:29.083 "aliases": [ 
00:07:29.083 "c23358af-342b-472e-a815-ce0d50076a38" 00:07:29.083 ], 00:07:29.083 "product_name": "NVMe disk", 00:07:29.083 "block_size": 4096, 00:07:29.083 "num_blocks": 38912, 00:07:29.083 "uuid": "c23358af-342b-472e-a815-ce0d50076a38", 00:07:29.083 "assigned_rate_limits": { 00:07:29.083 "rw_ios_per_sec": 0, 00:07:29.083 "rw_mbytes_per_sec": 0, 00:07:29.083 "r_mbytes_per_sec": 0, 00:07:29.083 "w_mbytes_per_sec": 0 00:07:29.083 }, 00:07:29.083 "claimed": false, 00:07:29.083 "zoned": false, 00:07:29.083 "supported_io_types": { 00:07:29.083 "read": true, 00:07:29.083 "write": true, 00:07:29.083 "unmap": true, 00:07:29.083 "flush": true, 00:07:29.083 "reset": true, 00:07:29.083 "nvme_admin": true, 00:07:29.083 "nvme_io": true, 00:07:29.083 "nvme_io_md": false, 00:07:29.083 "write_zeroes": true, 00:07:29.083 "zcopy": false, 00:07:29.083 "get_zone_info": false, 00:07:29.083 "zone_management": false, 00:07:29.083 "zone_append": false, 00:07:29.083 "compare": true, 00:07:29.083 "compare_and_write": true, 00:07:29.083 "abort": true, 00:07:29.083 "seek_hole": false, 00:07:29.083 "seek_data": false, 00:07:29.083 "copy": true, 00:07:29.083 "nvme_iov_md": false 00:07:29.083 }, 00:07:29.083 "memory_domains": [ 00:07:29.083 { 00:07:29.083 "dma_device_id": "system", 00:07:29.083 "dma_device_type": 1 00:07:29.083 } 00:07:29.083 ], 00:07:29.083 "driver_specific": { 00:07:29.083 "nvme": [ 00:07:29.083 { 00:07:29.083 "trid": { 00:07:29.083 "trtype": "TCP", 00:07:29.083 "adrfam": "IPv4", 00:07:29.083 "traddr": "10.0.0.2", 00:07:29.083 "trsvcid": "4420", 00:07:29.083 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:29.083 }, 00:07:29.083 "ctrlr_data": { 00:07:29.083 "cntlid": 1, 00:07:29.083 "vendor_id": "0x8086", 00:07:29.084 "model_number": "SPDK bdev Controller", 00:07:29.084 "serial_number": "SPDK0", 00:07:29.084 "firmware_revision": "24.09", 00:07:29.084 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:29.084 "oacs": { 00:07:29.084 "security": 0, 00:07:29.084 "format": 0, 00:07:29.084 
"firmware": 0, 00:07:29.084 "ns_manage": 0 00:07:29.084 }, 00:07:29.084 "multi_ctrlr": true, 00:07:29.084 "ana_reporting": false 00:07:29.084 }, 00:07:29.084 "vs": { 00:07:29.084 "nvme_version": "1.3" 00:07:29.084 }, 00:07:29.084 "ns_data": { 00:07:29.084 "id": 1, 00:07:29.084 "can_share": true 00:07:29.084 } 00:07:29.084 } 00:07:29.084 ], 00:07:29.084 "mp_policy": "active_passive" 00:07:29.084 } 00:07:29.084 } 00:07:29.084 ] 00:07:29.341 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3776350 00:07:29.341 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:29.341 22:16:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:29.341 Running I/O for 10 seconds... 00:07:30.274 Latency(us) 00:07:30.274 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:30.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.274 Nvme0n1 : 1.00 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:07:30.274 =================================================================================================================== 00:07:30.274 Total : 13717.00 53.58 0.00 0.00 0.00 0.00 0.00 00:07:30.274 00:07:31.208 22:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:31.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.208 Nvme0n1 : 2.00 13843.50 54.08 0.00 0.00 0.00 0.00 0.00 00:07:31.208 =================================================================================================================== 00:07:31.208 Total : 13843.50 54.08 
0.00 0.00 0.00 0.00 0.00 00:07:31.208 00:07:31.466 true 00:07:31.466 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:31.466 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:31.723 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:31.723 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:31.724 22:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3776350 00:07:32.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.289 Nvme0n1 : 3.00 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:07:32.289 =================================================================================================================== 00:07:32.289 Total : 13907.00 54.32 0.00 0.00 0.00 0.00 0.00 00:07:32.289 00:07:33.224 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.224 Nvme0n1 : 4.00 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:07:33.224 =================================================================================================================== 00:07:33.224 Total : 13971.00 54.57 0.00 0.00 0.00 0.00 0.00 00:07:33.224 00:07:34.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.214 Nvme0n1 : 5.00 13996.20 54.67 0.00 0.00 0.00 0.00 0.00 00:07:34.214 =================================================================================================================== 00:07:34.214 Total : 13996.20 54.67 0.00 0.00 0.00 0.00 0.00 00:07:34.214 00:07:35.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:35.668 Nvme0n1 : 6.00 14034.17 54.82 0.00 0.00 0.00 0.00 0.00 00:07:35.668 =================================================================================================================== 00:07:35.668 Total : 14034.17 54.82 0.00 0.00 0.00 0.00 0.00 00:07:35.668 00:07:36.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.233 Nvme0n1 : 7.00 14061.29 54.93 0.00 0.00 0.00 0.00 0.00 00:07:36.233 =================================================================================================================== 00:07:36.233 Total : 14061.29 54.93 0.00 0.00 0.00 0.00 0.00 00:07:36.233 00:07:37.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.608 Nvme0n1 : 8.00 14081.62 55.01 0.00 0.00 0.00 0.00 0.00 00:07:37.608 =================================================================================================================== 00:07:37.608 Total : 14081.62 55.01 0.00 0.00 0.00 0.00 0.00 00:07:37.608 00:07:38.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.539 Nvme0n1 : 9.00 14111.56 55.12 0.00 0.00 0.00 0.00 0.00 00:07:38.539 =================================================================================================================== 00:07:38.539 Total : 14111.56 55.12 0.00 0.00 0.00 0.00 0.00 00:07:38.539 00:07:39.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.473 Nvme0n1 : 10.00 14129.20 55.19 0.00 0.00 0.00 0.00 0.00 00:07:39.473 =================================================================================================================== 00:07:39.473 Total : 14129.20 55.19 0.00 0.00 0.00 0.00 0.00 00:07:39.473 00:07:39.473 00:07:39.473 Latency(us) 00:07:39.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.473 Nvme0n1 : 10.01 14127.38 55.19 0.00 0.00 9054.19 
4684.61 16699.54 00:07:39.473 =================================================================================================================== 00:07:39.473 Total : 14127.38 55.19 0.00 0.00 9054.19 4684.61 16699.54 00:07:39.473 0 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3776249 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3776249 ']' 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3776249 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3776249 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3776249' 00:07:39.473 killing process with pid 3776249 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3776249 00:07:39.473 Received shutdown signal, test time was about 10.000000 seconds 00:07:39.473 00:07:39.473 Latency(us) 00:07:39.473 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:39.473 =================================================================================================================== 00:07:39.473 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:07:39.473 22:17:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3776249 00:07:39.473 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.040 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:40.298 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:40.298 22:17:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3774099 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3774099 00:07:40.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3774099 Killed "${NVMF_APP[@]}" "$@" 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3777374 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3777374 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3777374 ']' 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.557 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.557 [2024-07-24 22:17:06.172427] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:07:40.557 [2024-07-24 22:17:06.172533] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.557 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.557 [2024-07-24 22:17:06.239763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.815 [2024-07-24 22:17:06.355259] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.815 [2024-07-24 22:17:06.355328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.815 [2024-07-24 22:17:06.355344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.815 [2024-07-24 22:17:06.355357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.815 [2024-07-24 22:17:06.355368] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:40.815 [2024-07-24 22:17:06.355404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.816 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:41.074 [2024-07-24 22:17:06.756907] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:41.074 [2024-07-24 22:17:06.757055] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:41.074 [2024-07-24 22:17:06.757108] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c23358af-342b-472e-a815-ce0d50076a38 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c23358af-342b-472e-a815-ce0d50076a38 
00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:41.074 22:17:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:41.639 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c23358af-342b-472e-a815-ce0d50076a38 -t 2000 00:07:41.897 [ 00:07:41.897 { 00:07:41.897 "name": "c23358af-342b-472e-a815-ce0d50076a38", 00:07:41.897 "aliases": [ 00:07:41.897 "lvs/lvol" 00:07:41.897 ], 00:07:41.897 "product_name": "Logical Volume", 00:07:41.897 "block_size": 4096, 00:07:41.897 "num_blocks": 38912, 00:07:41.897 "uuid": "c23358af-342b-472e-a815-ce0d50076a38", 00:07:41.897 "assigned_rate_limits": { 00:07:41.897 "rw_ios_per_sec": 0, 00:07:41.897 "rw_mbytes_per_sec": 0, 00:07:41.897 "r_mbytes_per_sec": 0, 00:07:41.897 "w_mbytes_per_sec": 0 00:07:41.897 }, 00:07:41.897 "claimed": false, 00:07:41.897 "zoned": false, 00:07:41.897 "supported_io_types": { 00:07:41.897 "read": true, 00:07:41.897 "write": true, 00:07:41.897 "unmap": true, 00:07:41.897 "flush": false, 00:07:41.897 "reset": true, 00:07:41.897 "nvme_admin": false, 00:07:41.897 "nvme_io": false, 00:07:41.897 "nvme_io_md": false, 00:07:41.897 "write_zeroes": true, 00:07:41.897 "zcopy": false, 00:07:41.897 "get_zone_info": false, 00:07:41.897 "zone_management": false, 00:07:41.897 "zone_append": 
false, 00:07:41.897 "compare": false, 00:07:41.897 "compare_and_write": false, 00:07:41.897 "abort": false, 00:07:41.897 "seek_hole": true, 00:07:41.897 "seek_data": true, 00:07:41.897 "copy": false, 00:07:41.897 "nvme_iov_md": false 00:07:41.897 }, 00:07:41.897 "driver_specific": { 00:07:41.897 "lvol": { 00:07:41.897 "lvol_store_uuid": "52e110cb-9c21-4011-bd3f-e07054807e1f", 00:07:41.897 "base_bdev": "aio_bdev", 00:07:41.897 "thin_provision": false, 00:07:41.897 "num_allocated_clusters": 38, 00:07:41.897 "snapshot": false, 00:07:41.897 "clone": false, 00:07:41.897 "esnap_clone": false 00:07:41.897 } 00:07:41.897 } 00:07:41.897 } 00:07:41.897 ] 00:07:41.897 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:07:41.897 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:41.897 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:42.154 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:42.154 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:42.154 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:42.412 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:42.412 22:17:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:42.669 [2024-07-24 22:17:08.242441] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.669 22:17:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:42.669 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:42.926 request: 00:07:42.926 { 00:07:42.926 "uuid": "52e110cb-9c21-4011-bd3f-e07054807e1f", 00:07:42.926 "method": "bdev_lvol_get_lvstores", 00:07:42.926 "req_id": 1 00:07:42.926 } 00:07:42.926 Got JSON-RPC error response 00:07:42.926 response: 00:07:42.926 { 00:07:42.926 "code": -19, 00:07:42.926 "message": "No such device" 00:07:42.926 } 00:07:42.926 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:07:42.926 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:42.926 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:42.926 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:42.926 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.184 aio_bdev 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c23358af-342b-472e-a815-ce0d50076a38 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c23358af-342b-472e-a815-ce0d50076a38 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:43.184 22:17:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.749 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c23358af-342b-472e-a815-ce0d50076a38 -t 2000 00:07:43.749 [ 00:07:43.749 { 00:07:43.749 "name": "c23358af-342b-472e-a815-ce0d50076a38", 00:07:43.749 "aliases": [ 00:07:43.749 "lvs/lvol" 00:07:43.749 ], 00:07:43.749 "product_name": "Logical Volume", 00:07:43.749 "block_size": 4096, 00:07:43.749 "num_blocks": 38912, 00:07:43.749 "uuid": "c23358af-342b-472e-a815-ce0d50076a38", 00:07:43.749 "assigned_rate_limits": { 00:07:43.749 "rw_ios_per_sec": 0, 00:07:43.749 "rw_mbytes_per_sec": 0, 00:07:43.749 "r_mbytes_per_sec": 0, 00:07:43.749 "w_mbytes_per_sec": 0 00:07:43.749 }, 00:07:43.749 "claimed": false, 00:07:43.749 "zoned": false, 00:07:43.749 "supported_io_types": { 00:07:43.749 "read": true, 00:07:43.749 "write": true, 00:07:43.749 "unmap": true, 00:07:43.749 "flush": false, 00:07:43.749 "reset": true, 00:07:43.749 "nvme_admin": false, 00:07:43.750 "nvme_io": false, 00:07:43.750 "nvme_io_md": false, 00:07:43.750 "write_zeroes": true, 00:07:43.750 "zcopy": false, 00:07:43.750 "get_zone_info": false, 00:07:43.750 "zone_management": false, 00:07:43.750 "zone_append": false, 00:07:43.750 "compare": false, 00:07:43.750 "compare_and_write": false, 
00:07:43.750 "abort": false, 00:07:43.750 "seek_hole": true, 00:07:43.750 "seek_data": true, 00:07:43.750 "copy": false, 00:07:43.750 "nvme_iov_md": false 00:07:43.750 }, 00:07:43.750 "driver_specific": { 00:07:43.750 "lvol": { 00:07:43.750 "lvol_store_uuid": "52e110cb-9c21-4011-bd3f-e07054807e1f", 00:07:43.750 "base_bdev": "aio_bdev", 00:07:43.750 "thin_provision": false, 00:07:43.750 "num_allocated_clusters": 38, 00:07:43.750 "snapshot": false, 00:07:43.750 "clone": false, 00:07:43.750 "esnap_clone": false 00:07:43.750 } 00:07:43.750 } 00:07:43.750 } 00:07:43.750 ] 00:07:44.007 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:07:44.007 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:44.007 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:44.265 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:44.265 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:44.265 22:17:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:44.522 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:44.522 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c23358af-342b-472e-a815-ce0d50076a38 00:07:44.780 22:17:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 52e110cb-9c21-4011-bd3f-e07054807e1f 00:07:45.038 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:45.298 00:07:45.298 real 0m20.001s 00:07:45.298 user 0m51.093s 00:07:45.298 sys 0m4.461s 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:45.298 ************************************ 00:07:45.298 END TEST lvs_grow_dirty 00:07:45.298 ************************************ 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@818 -- # for n in $shm_files 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:45.298 nvmf_trace.0 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:45.298 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:45.298 rmmod nvme_tcp 00:07:45.299 rmmod nvme_fabrics 00:07:45.299 rmmod nvme_keyring 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3777374 ']' 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3777374 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3777374 ']' 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3777374 
00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3777374 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3777374' 00:07:45.299 killing process with pid 3777374 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3777374 00:07:45.299 22:17:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3777374 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.559 22:17:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.099 22:17:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:48.099 00:07:48.099 real 0m43.446s 00:07:48.099 user 1m15.138s 00:07:48.099 sys 0m8.044s 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 ************************************ 00:07:48.099 END TEST nvmf_lvs_grow 00:07:48.099 ************************************ 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:48.099 ************************************ 00:07:48.099 START TEST nvmf_bdev_io_wait 00:07:48.099 ************************************ 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:48.099 * Looking for test storage... 
00:07:48.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:48.099 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:48.100 22:17:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:48.100 22:17:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:49.479 22:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:49.479 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:49.480 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:49.480 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.480 22:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:49.480 Found net devices under 0000:08:00.0: cvl_0_0 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:49.480 Found net devices under 0000:08:00.1: cvl_0_1 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:49.480 22:17:14 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:49.480 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:49.481 22:17:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:49.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:49.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:07:49.481 00:07:49.481 --- 10.0.0.2 ping statistics --- 00:07:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.481 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:49.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:49.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:07:49.481 00:07:49.481 --- 10.0.0.1 ping statistics --- 00:07:49.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:49.481 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3779338 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3779338 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3779338 ']' 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:49.481 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.481 [2024-07-24 22:17:15.101108] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:49.481 [2024-07-24 22:17:15.101210] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.481 [2024-07-24 22:17:15.167598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:49.742 [2024-07-24 22:17:15.289147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:49.742 [2024-07-24 22:17:15.289215] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:49.742 [2024-07-24 22:17:15.289231] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.742 [2024-07-24 22:17:15.289244] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.742 [2024-07-24 22:17:15.289255] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:49.742 [2024-07-24 22:17:15.289342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.742 [2024-07-24 22:17:15.289398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:49.742 [2024-07-24 22:17:15.289446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:49.742 [2024-07-24 22:17:15.289448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.742 
22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:49.742 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 [2024-07-24 22:17:15.453899] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 Malloc0 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:50.002 
22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:50.002 [2024-07-24 22:17:15.523468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3779374 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3779376 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.002 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.003 { 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme$subsystem", 00:07:50.003 "trtype": "$TEST_TRANSPORT", 00:07:50.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "$NVMF_PORT", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.003 "hdgst": ${hdgst:-false}, 00:07:50.003 "ddgst": ${ddgst:-false} 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 } 00:07:50.003 EOF 00:07:50.003 )") 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3779379 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.003 22:17:15 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.003 { 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme$subsystem", 00:07:50.003 "trtype": "$TEST_TRANSPORT", 00:07:50.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "$NVMF_PORT", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.003 "hdgst": ${hdgst:-false}, 00:07:50.003 "ddgst": ${ddgst:-false} 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 } 00:07:50.003 EOF 00:07:50.003 )") 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3779383 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.003 { 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme$subsystem", 00:07:50.003 "trtype": "$TEST_TRANSPORT", 00:07:50.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.003 "adrfam": "ipv4", 
00:07:50.003 "trsvcid": "$NVMF_PORT", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.003 "hdgst": ${hdgst:-false}, 00:07:50.003 "ddgst": ${ddgst:-false} 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 } 00:07:50.003 EOF 00:07:50.003 )") 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:50.003 { 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme$subsystem", 00:07:50.003 "trtype": "$TEST_TRANSPORT", 00:07:50.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "$NVMF_PORT", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.003 "hdgst": ${hdgst:-false}, 00:07:50.003 "ddgst": ${ddgst:-false} 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 } 00:07:50.003 EOF 00:07:50.003 )") 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 3779374 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme1", 00:07:50.003 "trtype": "tcp", 00:07:50.003 "traddr": "10.0.0.2", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "4420", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.003 "hdgst": false, 00:07:50.003 "ddgst": false 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 }' 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme1", 00:07:50.003 "trtype": "tcp", 00:07:50.003 "traddr": "10.0.0.2", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "4420", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.003 "hdgst": false, 00:07:50.003 "ddgst": false 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 }' 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme1", 00:07:50.003 "trtype": "tcp", 00:07:50.003 "traddr": "10.0.0.2", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "4420", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.003 "hdgst": false, 00:07:50.003 "ddgst": false 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 }' 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:50.003 22:17:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:50.003 "params": { 00:07:50.003 "name": "Nvme1", 00:07:50.003 "trtype": "tcp", 00:07:50.003 "traddr": "10.0.0.2", 00:07:50.003 "adrfam": "ipv4", 00:07:50.003 "trsvcid": "4420", 00:07:50.003 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:50.003 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:50.003 "hdgst": false, 00:07:50.003 "ddgst": false 00:07:50.003 }, 00:07:50.003 "method": "bdev_nvme_attach_controller" 00:07:50.003 }' 00:07:50.003 [2024-07-24 22:17:15.575108] Starting SPDK v24.09-pre git sha1 
643864934 / DPDK 24.03.0 initialization... 00:07:50.003 [2024-07-24 22:17:15.575108] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:50.003 [2024-07-24 22:17:15.575198] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:50.003 [2024-07-24 22:17:15.575198] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:50.003 [2024-07-24 22:17:15.576249] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:50.003 [2024-07-24 22:17:15.576251] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:50.003 [2024-07-24 22:17:15.576332] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:50.003 [2024-07-24 22:17:15.576332] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:50.003 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.003 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.262 [2024-07-24 22:17:15.720723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.262 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.262 [2024-07-24 22:17:15.791381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.262 [2024-07-24 22:17:15.817327] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.262 [2024-07-24 22:17:15.856891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.262 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.262 [2024-07-24 22:17:15.888318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:50.262 [2024-07-24 22:17:15.948276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.262 [2024-07-24 22:17:15.952359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:50.522 [2024-07-24 22:17:16.042828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:50.522 Running I/O for 1 seconds... 00:07:50.522 Running I/O for 1 seconds... 00:07:50.782 Running I/O for 1 seconds... 00:07:50.782 Running I/O for 1 seconds... 00:07:51.720 00:07:51.721 Latency(us) 00:07:51.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.721 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:51.721 Nvme1n1 : 1.02 5710.38 22.31 0.00 0.00 22201.78 9709.04 33981.63 00:07:51.721 =================================================================================================================== 00:07:51.721 Total : 5710.38 22.31 0.00 0.00 22201.78 9709.04 33981.63 00:07:51.721 00:07:51.721 Latency(us) 00:07:51.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.721 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:51.721 Nvme1n1 : 1.00 158908.15 620.73 0.00 0.00 801.94 335.27 958.77 00:07:51.721 =================================================================================================================== 00:07:51.721 Total : 158908.15 620.73 0.00 0.00 801.94 335.27 958.77 00:07:51.721 00:07:51.721 Latency(us) 00:07:51.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.721 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 
00:07:51.721 Nvme1n1 : 1.01 5339.64 20.86 0.00 0.00 23852.86 9029.40 42913.94 00:07:51.721 =================================================================================================================== 00:07:51.721 Total : 5339.64 20.86 0.00 0.00 23852.86 9029.40 42913.94 00:07:51.721 00:07:51.721 Latency(us) 00:07:51.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.721 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:51.721 Nvme1n1 : 1.01 8993.20 35.13 0.00 0.00 14171.63 7330.32 25826.04 00:07:51.721 =================================================================================================================== 00:07:51.721 Total : 8993.20 35.13 0.00 0.00 14171.63 7330.32 25826.04 00:07:51.721 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3779376 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3779379 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3779383 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:51.980 22:17:17 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:51.980 rmmod nvme_tcp 00:07:51.980 rmmod nvme_fabrics 00:07:51.980 rmmod nvme_keyring 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3779338 ']' 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3779338 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3779338 ']' 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3779338 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:51.980 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3779338 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- 
# '[' reactor_0 = sudo ']' 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3779338' 00:07:52.240 killing process with pid 3779338 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3779338 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3779338 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.240 22:17:17 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:54.782 00:07:54.782 real 0m6.699s 00:07:54.782 user 0m16.484s 00:07:54.782 sys 0m3.056s 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.782 ************************************ 00:07:54.782 END TEST nvmf_bdev_io_wait 00:07:54.782 
************************************ 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.782 22:17:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.782 ************************************ 00:07:54.782 START TEST nvmf_queue_depth 00:07:54.782 ************************************ 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.782 * Looking for test storage... 00:07:54.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.782 
22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.782 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.783 22:17:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:54.783 22:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:07:56.164 Found 0000:08:00.0 (0x8086 - 0x159b) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:07:56.164 Found 0000:08:00.1 (0x8086 - 0x159b) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.164 22:17:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.164 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:07:56.164 Found net devices under 0000:08:00.0: cvl_0_0 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:07:56.165 Found net devices under 0000:08:00.1: cvl_0_1 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.165 22:17:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.165 
22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:56.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:07:56.165 00:07:56.165 --- 10.0.0.2 ping statistics --- 00:07:56.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.165 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:56.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:07:56.165 00:07:56.165 --- 10.0.0.1 ping statistics --- 00:07:56.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.165 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3781090 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3781090 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3781090 ']' 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.165 22:17:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.424 [2024-07-24 22:17:21.908756] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:07:56.424 [2024-07-24 22:17:21.908851] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.424 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.424 [2024-07-24 22:17:21.973572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.424 [2024-07-24 22:17:22.089130] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.424 [2024-07-24 22:17:22.089192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:56.424 [2024-07-24 22:17:22.089208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.424 [2024-07-24 22:17:22.089222] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.424 [2024-07-24 22:17:22.089234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.424 [2024-07-24 22:17:22.089270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 [2024-07-24 22:17:22.221015] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 Malloc0 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 [2024-07-24 22:17:22.284869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.683 22:17:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3781204 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3781204 /var/tmp/bdevperf.sock 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3781204 ']' 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.683 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:56.683 [2024-07-24 22:17:22.336688] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:07:56.683 [2024-07-24 22:17:22.336778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3781204 ] 00:07:56.683 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.943 [2024-07-24 22:17:22.398440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.943 [2024-07-24 22:17:22.515126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.943 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.943 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:07:56.943 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:56.943 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:56.943 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:57.202 NVMe0n1 00:07:57.202 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.202 22:17:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:57.202 Running I/O for 10 seconds... 
00:08:09.420 00:08:09.420 Latency(us) 00:08:09.420 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.420 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:09.420 Verification LBA range: start 0x0 length 0x4000 00:08:09.420 NVMe0n1 : 10.14 7835.89 30.61 0.00 0.00 129460.51 28350.39 84662.80 00:08:09.420 =================================================================================================================== 00:08:09.420 Total : 7835.89 30.61 0.00 0.00 129460.51 28350.39 84662.80 00:08:09.420 0 00:08:09.420 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3781204 00:08:09.420 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3781204 ']' 00:08:09.420 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3781204 00:08:09.420 22:17:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3781204 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3781204' 00:08:09.420 killing process with pid 3781204 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3781204 00:08:09.420 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.420 00:08:09.420 Latency(us) 00:08:09.420 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.420 =================================================================================================================== 00:08:09.420 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3781204 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:09.420 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:09.421 rmmod nvme_tcp 00:08:09.421 rmmod nvme_fabrics 00:08:09.421 rmmod nvme_keyring 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3781090 ']' 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3781090 ']' 
00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3781090' 00:08:09.421 killing process with pid 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3781090 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:08:09.421 22:17:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.991 00:08:09.991 real 0m15.609s 00:08:09.991 user 0m22.585s 00:08:09.991 sys 0m2.625s 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:09.991 ************************************ 00:08:09.991 END TEST nvmf_queue_depth 00:08:09.991 ************************************ 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.991 ************************************ 00:08:09.991 START TEST nvmf_target_multipath 00:08:09.991 ************************************ 00:08:09.991 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:10.251 * Looking for test storage... 
00:08:10.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:10.251 22:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 
00:08:11.632 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:11.632 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:11.633 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:11.633 Found net devices under 0000:08:00.0: cvl_0_0 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:11.633 Found net devices under 0000:08:00.1: cvl_0_1 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.633 22:17:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.633 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:11.892 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:11.892 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:08:11.892 00:08:11.892 --- 10.0.0.2 ping statistics --- 00:08:11.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.892 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:11.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:08:11.892 00:08:11.892 --- 10.0.0.1 ping statistics --- 00:08:11.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.892 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.892 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:11.893 22:17:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:11.893 only one NIC for nvmf test 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:11.893 rmmod nvme_tcp 00:08:11.893 rmmod nvme_fabrics 00:08:11.893 rmmod nvme_keyring 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.893 22:17:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.893 22:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:14.431 22:17:39 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.431 00:08:14.431 real 0m3.907s 00:08:14.431 user 0m0.676s 00:08:14.431 sys 0m1.214s 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:14.431 ************************************ 00:08:14.431 END TEST nvmf_target_multipath 00:08:14.431 ************************************ 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.431 
22:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.431 ************************************ 00:08:14.431 START TEST nvmf_zcopy 00:08:14.431 ************************************ 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:14.431 * Looking for test storage... 00:08:14.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.431 22:17:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.431 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.432 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.432 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.432 22:17:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:15.815 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.815 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:15.815 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:15.815 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.816 22:17:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:15.816 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:15.816 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:15.816 22:17:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:15.816 Found net devices under 0000:08:00.0: cvl_0_0 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.816 
22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:15.816 Found net devices under 0000:08:00.1: cvl_0_1 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.816 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.077 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:16.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:08:16.077 00:08:16.077 --- 10.0.0.2 ping statistics --- 00:08:16.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.077 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:08:16.077 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:08:16.078 00:08:16.078 --- 10.0.0.1 ping statistics --- 00:08:16.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.078 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@722 -- # xtrace_disable 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3785107 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3785107 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3785107 ']' 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.078 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.078 [2024-07-24 22:17:41.621722] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:08:16.078 [2024-07-24 22:17:41.621819] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.078 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.078 [2024-07-24 22:17:41.687967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.338 [2024-07-24 22:17:41.806168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:16.338 [2024-07-24 22:17:41.806232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:16.338 [2024-07-24 22:17:41.806248] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:16.338 [2024-07-24 22:17:41.806261] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:16.338 [2024-07-24 22:17:41.806272] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:16.338 [2024-07-24 22:17:41.806303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 [2024-07-24 22:17:41.946473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 [2024-07-24 22:17:41.962692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 malloc0 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:16.338 22:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:16.338 22:17:42 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:16.338 { 00:08:16.338 "params": { 00:08:16.338 "name": "Nvme$subsystem", 00:08:16.338 "trtype": "$TEST_TRANSPORT", 00:08:16.338 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:16.338 "adrfam": "ipv4", 00:08:16.338 "trsvcid": "$NVMF_PORT", 00:08:16.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:16.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:16.338 "hdgst": ${hdgst:-false}, 00:08:16.338 "ddgst": ${ddgst:-false} 00:08:16.338 }, 00:08:16.338 "method": "bdev_nvme_attach_controller" 00:08:16.338 } 00:08:16.338 EOF 00:08:16.338 )") 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:16.338 22:17:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:16.338 "params": { 00:08:16.338 "name": "Nvme1", 00:08:16.338 "trtype": "tcp", 00:08:16.338 "traddr": "10.0.0.2", 00:08:16.338 "adrfam": "ipv4", 00:08:16.338 "trsvcid": "4420", 00:08:16.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:16.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:16.338 "hdgst": false, 00:08:16.338 "ddgst": false 00:08:16.338 }, 00:08:16.338 "method": "bdev_nvme_attach_controller" 00:08:16.338 }' 00:08:16.599 [2024-07-24 22:17:42.052951] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:08:16.599 [2024-07-24 22:17:42.053039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3785132 ] 00:08:16.599 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.599 [2024-07-24 22:17:42.115358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.599 [2024-07-24 22:17:42.234990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.168 Running I/O for 10 seconds... 
00:08:27.156 00:08:27.156 Latency(us) 00:08:27.156 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:27.156 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:27.156 Verification LBA range: start 0x0 length 0x1000 00:08:27.156 Nvme1n1 : 10.02 4174.93 32.62 0.00 0.00 30576.02 4903.06 42137.22 00:08:27.156 =================================================================================================================== 00:08:27.156 Total : 4174.93 32.62 0.00 0.00 30576.02 4903.06 42137.22 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3786131 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.156 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.156 { 00:08:27.156 "params": { 00:08:27.156 "name": "Nvme$subsystem", 00:08:27.156 "trtype": "$TEST_TRANSPORT", 00:08:27.156 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.156 "adrfam": "ipv4", 00:08:27.157 "trsvcid": "$NVMF_PORT", 00:08:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.157 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.157 "hdgst": 
${hdgst:-false}, 00:08:27.157 "ddgst": ${ddgst:-false} 00:08:27.157 }, 00:08:27.157 "method": "bdev_nvme_attach_controller" 00:08:27.157 } 00:08:27.157 EOF 00:08:27.157 )") 00:08:27.157 [2024-07-24 22:17:52.838344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.157 [2024-07-24 22:17:52.838389] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.157 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:27.157 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:27.157 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:27.157 22:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.157 "params": { 00:08:27.157 "name": "Nvme1", 00:08:27.157 "trtype": "tcp", 00:08:27.157 "traddr": "10.0.0.2", 00:08:27.157 "adrfam": "ipv4", 00:08:27.157 "trsvcid": "4420", 00:08:27.157 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.157 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.157 "hdgst": false, 00:08:27.157 "ddgst": false 00:08:27.157 }, 00:08:27.157 "method": "bdev_nvme_attach_controller" 00:08:27.157 }' 00:08:27.157 [2024-07-24 22:17:52.846305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.157 [2024-07-24 22:17:52.846330] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.157 [2024-07-24 22:17:52.854323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.157 [2024-07-24 22:17:52.854346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.862345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.416 [2024-07-24 22:17:52.862368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.870367] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.416 [2024-07-24 22:17:52.870390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.878387] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.416 [2024-07-24 22:17:52.878409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.882997] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:08:27.416 [2024-07-24 22:17:52.883084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3786131 ] 00:08:27.416 [2024-07-24 22:17:52.886409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.416 [2024-07-24 22:17:52.886433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.894431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.416 [2024-07-24 22:17:52.894453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.416 [2024-07-24 22:17:52.902454] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.902476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.910475] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.910505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.417 [2024-07-24 22:17:52.918503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.918532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.926525] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.926547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.934547] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.934570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.942571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.942602] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.944714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.417 [2024-07-24 22:17:52.950650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.950701] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.958660] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.958707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.966639] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.966666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.974676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.974709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.982682] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.982708] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.990711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.990739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:52.998725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:52.998750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.006796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.006844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.014835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.014887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.022789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.022812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.030832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.030864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.038840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.038865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.046867] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.046896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.054885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.054911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.062911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.062939] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.064310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.417 [2024-07-24 22:17:53.070924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.070948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.079006] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.079058] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.087028] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.087079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.095048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.095099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.103075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.103125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 
22:17:53.111108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.111170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.417 [2024-07-24 22:17:53.119102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.417 [2024-07-24 22:17:53.119147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.127157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.127208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.135168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.135217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.143175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.143222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.151161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.151184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.159177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.159217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.167218] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.167248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.175233] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.175260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.183255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.183281] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.191299] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.191325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.199310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.199335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.207335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.207362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.215358] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.215383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.223381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.223405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.231406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.231434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 Running I/O for 5 seconds... 
00:08:27.676 [2024-07-24 22:17:53.239435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.239459] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.252732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.252763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.264242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.264272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.276261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.276301] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.676 [2024-07-24 22:17:53.288405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.676 [2024-07-24 22:17:53.288434] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.300375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.300404] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.312470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.312508] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.324511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.324540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.336629] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.336657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.348427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.348456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.360796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.360825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.677 [2024-07-24 22:17:53.372818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.677 [2024-07-24 22:17:53.372846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.384613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.384642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.396644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.396673] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.408638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.408666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.420465] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.420503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.432315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.432345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.444171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.444199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.455870] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.455899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.467797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.467825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.479522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.479551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.493491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.493519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.504897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.504926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.516901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.516930] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.528592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 
[2024-07-24 22:17:53.528620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.542328] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.542358] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.553182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.553210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.565937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.565966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.577814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.577842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.590437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.590465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.602157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.602194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.614101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.614130] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.626056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.626092] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.940 [2024-07-24 22:17:53.637951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.940 [2024-07-24 22:17:53.637980] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.653975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.654008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.664815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.664843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.677740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.677769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.689799] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.689829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.701689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.701719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.713954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.713983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.725758] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.725787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:28.218 [2024-07-24 22:17:53.737732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.737760] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.749784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.749812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.761427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.761456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.773370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.773408] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.785445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.785477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.797468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.797512] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.809425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.809454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.821877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.821905] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.833782] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.833814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.845850] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.845878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.858227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.858256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.870403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.870443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.883228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.883256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.895104] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.218 [2024-07-24 22:17:53.895132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.218 [2024-07-24 22:17:53.907025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.219 [2024-07-24 22:17:53.907054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.921013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.921043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.932487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.932515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.944629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.944658] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.956659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.956688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.969070] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.969099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.981114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.981143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:53.993041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:53.993069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.005659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.005688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.017895] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.017923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.030154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 
[2024-07-24 22:17:54.030183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.042399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.042427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.054430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.054466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.066602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.066630] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.078698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.078727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.090537] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.090565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.102242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.102270] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.114121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.114149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.125906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.125934] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.137910] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.137938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.151750] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.151779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.162649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.162679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.174736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.174765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.522 [2024-07-24 22:17:54.186783] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.522 [2024-07-24 22:17:54.186813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.808 [2024-07-24 22:17:54.198674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.808 [2024-07-24 22:17:54.198703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.808 [2024-07-24 22:17:54.210722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.808 [2024-07-24 22:17:54.210752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.808 [2024-07-24 22:17:54.222421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.808 [2024-07-24 22:17:54.222450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:28.808 [2024-07-24 22:17:54.234563] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:28.808 [2024-07-24 22:17:54.234592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:30.632 [2024-07-24 22:17:56.209847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:30.632 [2024-07-24 22:17:56.209875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:08:30.632 [2024-07-24 22:17:56.221922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.221950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.233965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.233994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.246431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.246471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.258710] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.258739] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.271151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.271179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.283630] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.283659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.295827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.295855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.308239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.308267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.320336] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.320364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.632 [2024-07-24 22:17:56.332856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.632 [2024-07-24 22:17:56.332883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.345103] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.345133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.357107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.357135] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.368912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.368941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.380932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.380961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.392884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.392913] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.404847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.404875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.416863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.416892] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.428656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.428685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.440576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.440605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.452596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.452626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.464789] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.464818] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.477305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.477346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.489742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.489771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.502071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.502099] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.514738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 
[2024-07-24 22:17:56.514766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.526785] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.526813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.538840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.538868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.552924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.552952] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.564668] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.564697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.576712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.576741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.588571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.588606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.901 [2024-07-24 22:17:56.601012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.901 [2024-07-24 22:17:56.601040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.612649] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.612678] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.624539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.624567] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.636570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.636598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.648413] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.648441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.660472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.660515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.672821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.672849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.685089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.685116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.696996] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.697024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.708922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.708967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.161 [2024-07-24 22:17:56.720827] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.720855] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.734773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.734801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.745788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.745816] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.161 [2024-07-24 22:17:56.757777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.161 [2024-07-24 22:17:56.757805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.770145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.770173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.782032] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.782061] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.796049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.796077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.807577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.807605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.819072] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.819100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.830804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.830832] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.842883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.842911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.162 [2024-07-24 22:17:56.854674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.162 [2024-07-24 22:17:56.854702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.866354] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.420 [2024-07-24 22:17:56.866383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.878315] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.420 [2024-07-24 22:17:56.878343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.892013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.420 [2024-07-24 22:17:56.892041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.903492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.420 [2024-07-24 22:17:56.903522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.915421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:31.420 [2024-07-24 22:17:56.915457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.420 [2024-07-24 22:17:56.927046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.927074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.938664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.938703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.950416] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.950444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.962958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.962986] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.974955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.974982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.987084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.987113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:56.998939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:56.998967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.011312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 
[2024-07-24 22:17:57.011341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.023202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.023230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.035158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.035186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.047289] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.047317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.059146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.059175] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.071219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.071250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.083106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.083134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.095226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.095254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.107567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.107595] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.421 [2024-07-24 22:17:57.119343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.421 [2024-07-24 22:17:57.119371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.131235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.131264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.143324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.143352] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.155666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.155693] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.167879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.167908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.180025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.180053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.191865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.191893] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.204132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.204160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:31.681 [2024-07-24 22:17:57.215881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.215909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.230112] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.230140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.241140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.241168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.254007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.254035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.266359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.266387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.278697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.278725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.290658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.290686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.302820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.302850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.317144] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.317188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.328667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.328696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.340567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.340595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.352893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.352925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.364833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.364862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.681 [2024-07-24 22:17:57.376922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.681 [2024-07-24 22:17:57.376961] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.388700] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.388729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.400905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.400934] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.412886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.412918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.424877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.424906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.438786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.438814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.449904] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.449933] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.462443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.462471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.474409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.474440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.486253] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.486282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.498224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.498254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.510186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 
[2024-07-24 22:17:57.510215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.522398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.522427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.534595] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.534624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.546819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.546848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.558503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.558532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.570437] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.570465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.582287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.582316] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.594541] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.594576] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.606572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.606600] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.618519] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.618551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.632268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.632304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:31.942 [2024-07-24 22:17:57.642681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:31.942 [2024-07-24 22:17:57.642709] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.654927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.654956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.666880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.666909] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.678922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.678950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.690651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.690679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.702667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.702695] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:32.203 [2024-07-24 22:17:57.714536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.714565] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.726284] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.726312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.738285] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.738313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.750277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.750308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.764134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.764166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.775383] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.775410] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.787445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.787477] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.799802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.799830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.811878] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.811907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.824037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.824066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.836281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.836309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.848008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.848036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.859745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.859773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.871497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.871526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.883258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.883286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.895105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.895132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.203 [2024-07-24 22:17:57.907168] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:32.203 [2024-07-24 22:17:57.907197] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.464 [2024-07-24 22:17:57.919024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.464 [2024-07-24 22:17:57.919060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.464 [2024-07-24 22:17:57.931175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.464 [2024-07-24 22:17:57.931204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.464 [2024-07-24 22:17:57.943073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.464 [2024-07-24 22:17:57.943101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.464 [2024-07-24 22:17:57.955338] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.464 [2024-07-24 22:17:57.955366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:57.967756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:57.967784] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:57.979706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:57.979735] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:57.991971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:57.991999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.003926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 
[2024-07-24 22:17:58.003954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.015927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.015955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.029985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.030013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.041549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.041578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.053719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.053748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.065974] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.066006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.077666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.077707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.089821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.089850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.107136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.107174] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.118980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.119009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.133011] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.133040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.144802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.144834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.156833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.156864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.465 [2024-07-24 22:17:58.168926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.465 [2024-07-24 22:17:58.168954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.180652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.180681] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.192364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.192396] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.204343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.204371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:32.726 [2024-07-24 22:17:58.215920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.215949] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.228300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.228329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.240373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.240402] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.252702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.252731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.260967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.260999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726
00:08:32.726 Latency(us)
00:08:32.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:32.726 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:08:32.726 Nvme1n1 : 5.01 10577.61 82.64 0.00 0.00 12084.05 5558.42 23010.42
00:08:32.726 ===================================================================================================================
00:08:32.726 Total : 10577.61 82.64 0.00 0.00 12084.05 5558.42 23010.42
00:08:32.726 [2024-07-24 22:17:58.267686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.267724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726
[2024-07-24 22:17:58.275703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.275732] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.283729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.283758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.295847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.295924] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.303856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.303918] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.315906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.315973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.327973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.328063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.339984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.340050] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.347979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.348037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.355987] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.356040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.363956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.363984] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.371987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.372019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.380003] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.380034] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.388022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.388049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.396107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.396158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.408134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.408187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.416095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.416122] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.726 [2024-07-24 22:17:58.424128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:32.726 [2024-07-24 22:17:58.424159] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.432149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.432178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.440165] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.440206] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.452288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.452344] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.460235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.460266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.468231] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.468254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 [2024-07-24 22:17:58.476252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:32.986 [2024-07-24 22:17:58.476275] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:32.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3786131) - No such process 00:08:32.986 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3786131 00:08:32.986 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:08:32.986 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.986 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.986 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 delay0 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:32.987 22:17:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:32.987 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.987 [2024-07-24 22:17:58.640587] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:41.105 Initializing NVMe Controllers 00:08:41.105 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:41.105 Associating 
TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:41.105 Initialization complete. Launching workers.
00:08:41.105 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 263, failed: 16034
00:08:41.105 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 16194, failed to submit 103
00:08:41.105 success 16103, unsuccess 91, failed 0
00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.105 rmmod nvme_tcp 00:08:41.105 rmmod nvme_fabrics 00:08:41.105 rmmod nvme_keyring 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3785107 ']' 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3785107 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3785107 ']' 00:08:41.105 22:18:05
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3785107 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3785107 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3785107' 00:08:41.105 killing process with pid 3785107 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3785107 00:08:41.105 22:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3785107 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.105 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.106 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.106 22:18:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.487
00:08:42.487 real 0m28.451s
00:08:42.487 user 0m39.075s
00:08:42.487 sys 0m9.705s
00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:42.487
00:08:42.487 ************************************
00:08:42.487 END TEST nvmf_zcopy
00:08:42.487 ************************************
00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.487
00:08:42.487 ************************************
00:08:42.487 START TEST nvmf_nmic
00:08:42.487 ************************************
00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.487 * Looking for test storage...
00:08:42.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.487 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.746 
22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.746 22:18:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.746 22:18:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:44.126 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:44.127 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:44.127 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:44.127 Found net devices under 0000:08:00.0: cvl_0_0 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.127 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:44.386 Found net devices under 0000:08:00.1: cvl_0_1 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.386 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.387 22:18:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:44.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:08:44.387 00:08:44.387 --- 10.0.0.2 ping statistics --- 00:08:44.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.387 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:08:44.387 00:08:44.387 --- 10.0.0.1 ping statistics --- 00:08:44.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.387 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3789430 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3789430 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3789430 ']' 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:44.387 22:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.387 [2024-07-24 22:18:10.046303] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:08:44.387 [2024-07-24 22:18:10.046404] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.387 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.647 [2024-07-24 22:18:10.114228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.647 [2024-07-24 22:18:10.233001] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.647 [2024-07-24 22:18:10.233061] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.647 [2024-07-24 22:18:10.233081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.647 [2024-07-24 22:18:10.233095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.647 [2024-07-24 22:18:10.233106] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:44.647 [2024-07-24 22:18:10.233161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.647 [2024-07-24 22:18:10.233185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.647 [2024-07-24 22:18:10.233206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.647 [2024-07-24 22:18:10.233209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.647 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:44.647 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:08:44.647 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:44.647 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:44.647 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 [2024-07-24 22:18:10.375578] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:08:44.906 Malloc0 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 [2024-07-24 22:18:10.424806] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:44.906 test case1: single bdev can't be used in multiple subsystems 
00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 [2024-07-24 22:18:10.448652] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:44.906 [2024-07-24 22:18:10.448684] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:44.906 [2024-07-24 22:18:10.448700] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:44.906 request: 00:08:44.906 { 00:08:44.906 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:44.906 "namespace": { 00:08:44.906 
"bdev_name": "Malloc0", 00:08:44.906 "no_auto_visible": false 00:08:44.906 }, 00:08:44.906 "method": "nvmf_subsystem_add_ns", 00:08:44.906 "req_id": 1 00:08:44.906 } 00:08:44.906 Got JSON-RPC error response 00:08:44.906 response: 00:08:44.906 { 00:08:44.906 "code": -32602, 00:08:44.906 "message": "Invalid parameters" 00:08:44.906 } 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:44.906 Adding namespace failed - expected result. 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:44.906 test case2: host connect to nvmf target in multiple paths 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:44.906 [2024-07-24 22:18:10.456769] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.906 22:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:45.475 22:18:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:46.044 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:46.044 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:08:46.044 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:08:46.044 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:08:46.044 22:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:08:47.949 22:18:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:47.949 [global] 00:08:47.949 thread=1 00:08:47.949 invalidate=1 00:08:47.949 rw=write 00:08:47.949 time_based=1 00:08:47.949 runtime=1 00:08:47.949 ioengine=libaio 00:08:47.949 direct=1 00:08:47.949 bs=4096 00:08:47.949 iodepth=1 00:08:47.949 
norandommap=0 00:08:47.949 numjobs=1 00:08:47.949 00:08:47.949 verify_dump=1 00:08:47.949 verify_backlog=512 00:08:47.949 verify_state_save=0 00:08:47.949 do_verify=1 00:08:47.949 verify=crc32c-intel 00:08:47.949 [job0] 00:08:47.949 filename=/dev/nvme0n1 00:08:47.949 Could not set queue depth (nvme0n1) 00:08:48.208 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:48.208 fio-3.35 00:08:48.208 Starting 1 thread 00:08:49.588 00:08:49.588 job0: (groupid=0, jobs=1): err= 0: pid=3789860: Wed Jul 24 22:18:14 2024 00:08:49.588 read: IOPS=22, BW=90.6KiB/s (92.8kB/s)(92.0KiB/1015msec) 00:08:49.588 slat (nsec): min=14158, max=31851, avg=21958.57, stdev=7440.88 00:08:49.588 clat (usec): min=465, max=42053, avg=39670.46, stdev=8560.08 00:08:49.588 lat (usec): min=494, max=42067, avg=39692.42, stdev=8558.53 00:08:49.588 clat percentiles (usec): 00:08:49.588 | 1.00th=[ 465], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:49.588 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:08:49.588 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:49.588 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:49.588 | 99.99th=[42206] 00:08:49.588 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:08:49.588 slat (nsec): min=6892, max=37808, avg=14614.43, stdev=5400.22 00:08:49.588 clat (usec): min=157, max=334, avg=180.89, stdev=14.95 00:08:49.588 lat (usec): min=165, max=342, avg=195.50, stdev=17.16 00:08:49.588 clat percentiles (usec): 00:08:49.588 | 1.00th=[ 161], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 172], 00:08:49.588 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 182], 00:08:49.588 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:08:49.588 | 99.00th=[ 227], 99.50th=[ 245], 99.90th=[ 334], 99.95th=[ 334], 00:08:49.588 | 99.99th=[ 334] 00:08:49.588 bw ( KiB/s): min= 4096, max= 4096, 
per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:49.588 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:49.588 lat (usec) : 250=95.33%, 500=0.56% 00:08:49.588 lat (msec) : 50=4.11% 00:08:49.588 cpu : usr=0.39%, sys=0.69%, ctx=535, majf=0, minf=1 00:08:49.588 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:49.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.588 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.588 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:49.588 00:08:49.588 Run status group 0 (all jobs): 00:08:49.588 READ: bw=90.6KiB/s (92.8kB/s), 90.6KiB/s-90.6KiB/s (92.8kB/s-92.8kB/s), io=92.0KiB (94.2kB), run=1015-1015msec 00:08:49.588 WRITE: bw=2018KiB/s (2066kB/s), 2018KiB/s-2018KiB/s (2066kB/s-2066kB/s), io=2048KiB (2097kB), run=1015-1015msec 00:08:49.588 00:08:49.588 Disk stats (read/write): 00:08:49.588 nvme0n1: ios=70/512, merge=0/0, ticks=805/88, in_queue=893, util=91.68% 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:49.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:49.588 22:18:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:49.588 rmmod nvme_tcp 00:08:49.588 rmmod nvme_fabrics 00:08:49.588 rmmod nvme_keyring 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3789430 ']' 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3789430 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3789430 ']' 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3789430 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- 
# '[' Linux = Linux ']' 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3789430 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3789430' 00:08:49.588 killing process with pid 3789430 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3789430 00:08:49.588 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3789430 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.847 22:18:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:51.756 00:08:51.756 real 0m9.233s 00:08:51.756 user 0m20.815s 00:08:51.756 sys 0m2.028s 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- 
# xtrace_disable 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.756 ************************************ 00:08:51.756 END TEST nvmf_nmic 00:08:51.756 ************************************ 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.756 ************************************ 00:08:51.756 START TEST nvmf_fio_target 00:08:51.756 ************************************ 00:08:51.756 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:52.015 * Looking for test storage... 
00:08:52.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.015 22:18:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.015 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:52.016 22:18:17 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.016 22:18:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.926 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:08:53.927 Found 0000:08:00.0 (0x8086 - 0x159b) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:08:53.927 Found 0000:08:00.1 (0x8086 - 0x159b) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:08:53.927 Found net devices under 0000:08:00.0: cvl_0_0 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:08:53.927 Found net devices under 0000:08:00.1: cvl_0_1 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:53.927 22:18:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:53.927 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.927 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:08:53.927 00:08:53.927 --- 10.0.0.2 ping statistics --- 00:08:53.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.927 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.927 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.927 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:08:53.927 00:08:53.927 --- 10.0.0.1 ping statistics --- 00:08:53.927 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.927 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3791470 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3791470 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3791470 ']' 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.927 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.928 [2024-07-24 22:18:19.327980] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:08:53.928 [2024-07-24 22:18:19.328077] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.928 EAL: No free 2048 kB hugepages reported on node 1 00:08:53.928 [2024-07-24 22:18:19.396682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:53.928 [2024-07-24 22:18:19.516829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.928 [2024-07-24 22:18:19.516897] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:53.928 [2024-07-24 22:18:19.516912] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.928 [2024-07-24 22:18:19.516925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.928 [2024-07-24 22:18:19.516936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.928 [2024-07-24 22:18:19.517024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.928 [2024-07-24 22:18:19.517050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:53.928 [2024-07-24 22:18:19.517103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:53.928 [2024-07-24 22:18:19.517106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.193 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:54.450 [2024-07-24 22:18:19.934683] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.451 22:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.709 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:54.709 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.968 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:54.968 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.226 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:55.226 22:18:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.796 22:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:55.796 22:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:56.055 22:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.313 22:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:56.313 22:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.571 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:56.571 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:56.830 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:56.830 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:08:57.088 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:57.345 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:57.345 22:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.603 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:57.603 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.862 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.121 [2024-07-24 22:18:23.671840] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.121 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:58.379 22:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:58.639 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:08:59.208 22:18:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:09:01.745 22:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:01.745 [global] 00:09:01.745 thread=1 00:09:01.745 invalidate=1 00:09:01.745 rw=write 00:09:01.745 time_based=1 00:09:01.745 runtime=1 00:09:01.745 ioengine=libaio 00:09:01.745 direct=1 00:09:01.745 bs=4096 00:09:01.746 iodepth=1 00:09:01.746 norandommap=0 00:09:01.746 numjobs=1 00:09:01.746 00:09:01.746 verify_dump=1 00:09:01.746 verify_backlog=512 00:09:01.746 verify_state_save=0 00:09:01.746 do_verify=1 00:09:01.746 verify=crc32c-intel 00:09:01.746 [job0] 00:09:01.746 filename=/dev/nvme0n1 00:09:01.746 [job1] 00:09:01.746 filename=/dev/nvme0n2 00:09:01.746 [job2] 00:09:01.746 filename=/dev/nvme0n3 00:09:01.746 [job3] 00:09:01.746 filename=/dev/nvme0n4 00:09:01.746 Could not set queue depth (nvme0n1) 00:09:01.746 Could not set queue depth (nvme0n2) 00:09:01.746 Could not set queue depth (nvme0n3) 00:09:01.746 Could not set queue depth (nvme0n4) 00:09:01.746 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.746 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.746 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.746 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.746 fio-3.35 00:09:01.746 Starting 4 threads 00:09:02.684 00:09:02.684 job0: (groupid=0, jobs=1): err= 0: pid=3792312: Wed Jul 24 22:18:28 2024 00:09:02.684 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:09:02.684 slat (nsec): min=6272, max=69384, avg=12494.95, stdev=6603.32 00:09:02.684 clat (usec): min=252, max=42244, avg=564.45, stdev=2861.65 00:09:02.684 lat (usec): min=260, max=42253, avg=576.94, stdev=2861.86 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 269], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 310], 
00:09:02.684 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 355], 00:09:02.684 | 70.00th=[ 367], 80.00th=[ 412], 90.00th=[ 474], 95.00th=[ 510], 00:09:02.684 | 99.00th=[ 627], 99.50th=[ 2343], 99.90th=[42206], 99.95th=[42206], 00:09:02.684 | 99.99th=[42206] 00:09:02.684 write: IOPS=1455, BW=5822KiB/s (5962kB/s)(5828KiB/1001msec); 0 zone resets 00:09:02.684 slat (nsec): min=7997, max=55367, avg=19690.52, stdev=6336.08 00:09:02.684 clat (usec): min=177, max=2620, avg=253.68, stdev=99.36 00:09:02.684 lat (usec): min=192, max=2648, avg=273.37, stdev=100.42 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 217], 00:09:02.684 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 243], 00:09:02.684 | 70.00th=[ 253], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 351], 00:09:02.684 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 2114], 99.95th=[ 2606], 00:09:02.684 | 99.99th=[ 2606] 00:09:02.684 bw ( KiB/s): min= 8192, max= 8192, per=37.83%, avg=8192.00, stdev= 0.00, samples=1 00:09:02.684 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:02.684 lat (usec) : 250=39.98%, 500=57.32%, 750=2.26%, 1000=0.04% 00:09:02.684 lat (msec) : 2=0.08%, 4=0.12%, 50=0.20% 00:09:02.684 cpu : usr=3.70%, sys=4.70%, ctx=2483, majf=0, minf=1 00:09:02.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:02.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 issued rwts: total=1024,1457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:02.684 job1: (groupid=0, jobs=1): err= 0: pid=3792313: Wed Jul 24 22:18:28 2024 00:09:02.684 read: IOPS=904, BW=3619KiB/s (3706kB/s)(3720KiB/1028msec) 00:09:02.684 slat (nsec): min=6132, max=38850, avg=12564.55, stdev=5730.87 00:09:02.684 clat (usec): min=280, 
max=41996, avg=803.03, stdev=3997.80 00:09:02.684 lat (usec): min=288, max=42004, avg=815.60, stdev=3998.94 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:09:02.684 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:09:02.684 | 70.00th=[ 420], 80.00th=[ 490], 90.00th=[ 515], 95.00th=[ 537], 00:09:02.684 | 99.00th=[ 5997], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:02.684 | 99.99th=[42206] 00:09:02.684 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:09:02.684 slat (nsec): min=8065, max=50029, avg=14404.82, stdev=6118.43 00:09:02.684 clat (usec): min=187, max=464, avg=240.65, stdev=32.58 00:09:02.684 lat (usec): min=199, max=475, avg=255.06, stdev=34.40 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 198], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:09:02.684 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 241], 00:09:02.684 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 306], 00:09:02.684 | 99.00th=[ 363], 99.50th=[ 400], 99.90th=[ 416], 99.95th=[ 465], 00:09:02.684 | 99.99th=[ 465] 00:09:02.684 bw ( KiB/s): min= 1304, max= 6888, per=18.92%, avg=4096.00, stdev=3948.48, samples=2 00:09:02.684 iops : min= 326, max= 1722, avg=1024.00, stdev=987.12, samples=2 00:09:02.684 lat (usec) : 250=38.59%, 500=53.94%, 750=6.81% 00:09:02.684 lat (msec) : 2=0.05%, 4=0.10%, 10=0.05%, 50=0.46% 00:09:02.684 cpu : usr=1.17%, sys=4.58%, ctx=1954, majf=0, minf=1 00:09:02.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:02.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 issued rwts: total=930,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:02.684 job2: (groupid=0, jobs=1): err= 0: 
pid=3792314: Wed Jul 24 22:18:28 2024 00:09:02.684 read: IOPS=1496, BW=5986KiB/s (6130kB/s)(5992KiB/1001msec) 00:09:02.684 slat (nsec): min=5131, max=53457, avg=11920.42, stdev=6123.09 00:09:02.684 clat (usec): min=247, max=42392, avg=410.74, stdev=1517.84 00:09:02.684 lat (usec): min=254, max=42409, avg=422.66, stdev=1518.16 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 258], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 306], 00:09:02.684 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:09:02.684 | 70.00th=[ 359], 80.00th=[ 396], 90.00th=[ 482], 95.00th=[ 506], 00:09:02.684 | 99.00th=[ 586], 99.50th=[ 619], 99.90th=[41157], 99.95th=[42206], 00:09:02.684 | 99.99th=[42206] 00:09:02.684 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:02.684 slat (nsec): min=6787, max=47480, avg=13722.65, stdev=5954.13 00:09:02.684 clat (usec): min=162, max=379, avg=218.40, stdev=26.81 00:09:02.684 lat (usec): min=170, max=407, avg=232.12, stdev=27.67 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 176], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 196], 00:09:02.684 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 223], 00:09:02.684 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 265], 00:09:02.684 | 99.00th=[ 318], 99.50th=[ 330], 99.90th=[ 359], 99.95th=[ 379], 00:09:02.684 | 99.99th=[ 379] 00:09:02.684 bw ( KiB/s): min= 6288, max= 6288, per=29.04%, avg=6288.00, stdev= 0.00, samples=1 00:09:02.684 iops : min= 1572, max= 1572, avg=1572.00, stdev= 0.00, samples=1 00:09:02.684 lat (usec) : 250=45.58%, 500=51.45%, 750=2.90% 00:09:02.684 lat (msec) : 50=0.07% 00:09:02.684 cpu : usr=2.60%, sys=3.90%, ctx=3034, majf=0, minf=1 00:09:02.684 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:02.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.684 
issued rwts: total=1498,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.684 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:02.684 job3: (groupid=0, jobs=1): err= 0: pid=3792315: Wed Jul 24 22:18:28 2024 00:09:02.684 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:02.684 slat (nsec): min=5151, max=50192, avg=11600.76, stdev=5392.42 00:09:02.684 clat (usec): min=251, max=40930, avg=386.24, stdev=1037.92 00:09:02.684 lat (usec): min=256, max=40937, avg=397.84, stdev=1037.96 00:09:02.684 clat percentiles (usec): 00:09:02.684 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 306], 00:09:02.684 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 343], 00:09:02.684 | 70.00th=[ 379], 80.00th=[ 441], 90.00th=[ 486], 95.00th=[ 510], 00:09:02.684 | 99.00th=[ 562], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[41157], 00:09:02.684 | 99.99th=[41157] 00:09:02.684 write: IOPS=1546, BW=6186KiB/s (6334kB/s)(6192KiB/1001msec); 0 zone resets 00:09:02.684 slat (nsec): min=6943, max=63485, avg=14286.98, stdev=5436.45 00:09:02.684 clat (usec): min=175, max=481, avg=230.07, stdev=33.64 00:09:02.684 lat (usec): min=186, max=504, avg=244.36, stdev=33.75 00:09:02.684 clat percentiles (usec): 00:09:02.685 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 202], 00:09:02.685 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 233], 00:09:02.685 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 277], 95.00th=[ 302], 00:09:02.685 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 383], 99.95th=[ 482], 00:09:02.685 | 99.99th=[ 482] 00:09:02.685 bw ( KiB/s): min= 7520, max= 7520, per=34.73%, avg=7520.00, stdev= 0.00, samples=1 00:09:02.685 iops : min= 1880, max= 1880, avg=1880.00, stdev= 0.00, samples=1 00:09:02.685 lat (usec) : 250=39.43%, 500=57.13%, 750=3.40% 00:09:02.685 lat (msec) : 50=0.03% 00:09:02.685 cpu : usr=2.50%, sys=3.90%, ctx=3084, majf=0, minf=1 00:09:02.685 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:09:02.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.685 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:02.685 issued rwts: total=1536,1548,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:02.685 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:02.685 00:09:02.685 Run status group 0 (all jobs): 00:09:02.685 READ: bw=19.0MiB/s (19.9MB/s), 3619KiB/s-6138KiB/s (3706kB/s-6285kB/s), io=19.5MiB (20.4MB), run=1001-1028msec 00:09:02.685 WRITE: bw=21.1MiB/s (22.2MB/s), 3984KiB/s-6186KiB/s (4080kB/s-6334kB/s), io=21.7MiB (22.8MB), run=1001-1028msec 00:09:02.685 00:09:02.685 Disk stats (read/write): 00:09:02.685 nvme0n1: ios=1067/1024, merge=0/0, ticks=815/230, in_queue=1045, util=97.80% 00:09:02.685 nvme0n2: ios=935/1024, merge=0/0, ticks=545/231, in_queue=776, util=86.97% 00:09:02.685 nvme0n3: ios=1028/1536, merge=0/0, ticks=453/335, in_queue=788, util=88.90% 00:09:02.685 nvme0n4: ios=1140/1536, merge=0/0, ticks=436/343, in_queue=779, util=89.65% 00:09:02.685 22:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:02.685 [global] 00:09:02.685 thread=1 00:09:02.685 invalidate=1 00:09:02.685 rw=randwrite 00:09:02.685 time_based=1 00:09:02.685 runtime=1 00:09:02.685 ioengine=libaio 00:09:02.685 direct=1 00:09:02.685 bs=4096 00:09:02.685 iodepth=1 00:09:02.685 norandommap=0 00:09:02.685 numjobs=1 00:09:02.685 00:09:02.685 verify_dump=1 00:09:02.685 verify_backlog=512 00:09:02.685 verify_state_save=0 00:09:02.685 do_verify=1 00:09:02.685 verify=crc32c-intel 00:09:02.685 [job0] 00:09:02.685 filename=/dev/nvme0n1 00:09:02.685 [job1] 00:09:02.685 filename=/dev/nvme0n2 00:09:02.685 [job2] 00:09:02.685 filename=/dev/nvme0n3 00:09:02.685 [job3] 00:09:02.685 filename=/dev/nvme0n4 00:09:02.685 Could not set queue depth (nvme0n1) 00:09:02.685 Could not set queue depth 
(nvme0n2) 00:09:02.685 Could not set queue depth (nvme0n3) 00:09:02.685 Could not set queue depth (nvme0n4) 00:09:02.945 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.945 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.945 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.945 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:02.945 fio-3.35 00:09:02.945 Starting 4 threads 00:09:04.321 00:09:04.321 job0: (groupid=0, jobs=1): err= 0: pid=3792577: Wed Jul 24 22:18:29 2024 00:09:04.321 read: IOPS=1521, BW=6086KiB/s (6232kB/s)(6092KiB/1001msec) 00:09:04.321 slat (nsec): min=5483, max=36517, avg=11917.18, stdev=4193.12 00:09:04.321 clat (usec): min=244, max=42237, avg=380.31, stdev=1107.73 00:09:04.321 lat (usec): min=251, max=42253, avg=392.23, stdev=1107.86 00:09:04.321 clat percentiles (usec): 00:09:04.321 | 1.00th=[ 260], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 302], 00:09:04.321 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 343], 00:09:04.321 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 441], 95.00th=[ 486], 00:09:04.321 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[10814], 99.95th=[42206], 00:09:04.321 | 99.99th=[42206] 00:09:04.321 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:04.321 slat (nsec): min=7076, max=51192, avg=15409.40, stdev=6599.38 00:09:04.321 clat (usec): min=187, max=450, avg=239.10, stdev=33.58 00:09:04.321 lat (usec): min=198, max=470, avg=254.51, stdev=32.65 00:09:04.321 clat percentiles (usec): 00:09:04.321 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:09:04.321 | 30.00th=[ 219], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 241], 00:09:04.321 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 302], 
00:09:04.321 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 412], 99.95th=[ 453], 00:09:04.321 | 99.99th=[ 453] 00:09:04.321 bw ( KiB/s): min= 8192, max= 8192, per=34.73%, avg=8192.00, stdev= 0.00, samples=1 00:09:04.321 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:04.321 lat (usec) : 250=34.52%, 500=63.55%, 750=1.86% 00:09:04.321 lat (msec) : 20=0.03%, 50=0.03% 00:09:04.321 cpu : usr=3.30%, sys=4.20%, ctx=3061, majf=0, minf=1 00:09:04.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.321 issued rwts: total=1523,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.321 job1: (groupid=0, jobs=1): err= 0: pid=3792578: Wed Jul 24 22:18:29 2024 00:09:04.321 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:04.321 slat (nsec): min=4998, max=56688, avg=11195.16, stdev=4944.12 00:09:04.321 clat (usec): min=211, max=634, avg=314.65, stdev=64.81 00:09:04.321 lat (usec): min=217, max=641, avg=325.85, stdev=66.29 00:09:04.321 clat percentiles (usec): 00:09:04.321 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 247], 00:09:04.321 | 30.00th=[ 281], 40.00th=[ 297], 50.00th=[ 314], 60.00th=[ 334], 00:09:04.321 | 70.00th=[ 347], 80.00th=[ 359], 90.00th=[ 379], 95.00th=[ 441], 00:09:04.321 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 627], 99.95th=[ 635], 00:09:04.321 | 99.99th=[ 635] 00:09:04.321 write: IOPS=2034, BW=8140KiB/s (8335kB/s)(8148KiB/1001msec); 0 zone resets 00:09:04.321 slat (nsec): min=6329, max=61349, avg=15573.82, stdev=6008.04 00:09:04.321 clat (usec): min=155, max=702, avg=222.90, stdev=39.44 00:09:04.321 lat (usec): min=162, max=734, avg=238.47, stdev=41.61 00:09:04.321 clat percentiles (usec): 00:09:04.321 | 1.00th=[ 176], 5.00th=[ 186], 
10.00th=[ 190], 20.00th=[ 196], 00:09:04.321 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 215], 60.00th=[ 221], 00:09:04.321 | 70.00th=[ 231], 80.00th=[ 243], 90.00th=[ 273], 95.00th=[ 310], 00:09:04.321 | 99.00th=[ 330], 99.50th=[ 334], 99.90th=[ 685], 99.95th=[ 701], 00:09:04.321 | 99.99th=[ 701] 00:09:04.321 bw ( KiB/s): min= 8175, max= 8175, per=34.66%, avg=8175.00, stdev= 0.00, samples=1 00:09:04.321 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:09:04.321 lat (usec) : 250=57.77%, 500=41.76%, 750=0.48% 00:09:04.321 cpu : usr=3.20%, sys=6.50%, ctx=3573, majf=0, minf=1 00:09:04.321 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.321 issued rwts: total=1536,2037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.321 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.321 job2: (groupid=0, jobs=1): err= 0: pid=3792579: Wed Jul 24 22:18:29 2024 00:09:04.321 read: IOPS=528, BW=2114KiB/s (2165kB/s)(2180KiB/1031msec) 00:09:04.321 slat (nsec): min=6258, max=46981, avg=10933.44, stdev=7040.45 00:09:04.321 clat (usec): min=263, max=41475, avg=1345.04, stdev=6190.34 00:09:04.321 lat (usec): min=270, max=41506, avg=1355.98, stdev=6192.59 00:09:04.321 clat percentiles (usec): 00:09:04.321 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 297], 20.00th=[ 322], 00:09:04.321 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 375], 00:09:04.321 | 70.00th=[ 404], 80.00th=[ 465], 90.00th=[ 490], 95.00th=[ 510], 00:09:04.322 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:04.322 | 99.99th=[41681] 00:09:04.322 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:09:04.322 slat (nsec): min=6913, max=47522, avg=16537.30, stdev=6453.56 00:09:04.322 clat (usec): min=177, max=785, avg=262.48, stdev=43.79 
00:09:04.322 lat (usec): min=186, max=795, avg=279.02, stdev=45.09 00:09:04.322 clat percentiles (usec): 00:09:04.322 | 1.00th=[ 200], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 231], 00:09:04.322 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:09:04.322 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 347], 00:09:04.322 | 99.00th=[ 400], 99.50th=[ 433], 99.90th=[ 529], 99.95th=[ 783], 00:09:04.322 | 99.99th=[ 783] 00:09:04.322 bw ( KiB/s): min= 736, max= 7456, per=17.36%, avg=4096.00, stdev=4751.76, samples=2 00:09:04.322 iops : min= 184, max= 1864, avg=1024.00, stdev=1187.94, samples=2 00:09:04.322 lat (usec) : 250=27.09%, 500=70.49%, 750=1.47%, 1000=0.13% 00:09:04.322 lat (msec) : 50=0.83% 00:09:04.322 cpu : usr=1.75%, sys=2.23%, ctx=1569, majf=0, minf=1 00:09:04.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.322 issued rwts: total=545,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.322 job3: (groupid=0, jobs=1): err= 0: pid=3792580: Wed Jul 24 22:18:29 2024 00:09:04.322 read: IOPS=1394, BW=5577KiB/s (5711kB/s)(5800KiB/1040msec) 00:09:04.322 slat (nsec): min=5299, max=43137, avg=9724.76, stdev=3926.46 00:09:04.322 clat (usec): min=241, max=42388, avg=448.67, stdev=2514.29 00:09:04.322 lat (usec): min=247, max=42409, avg=458.40, stdev=2514.95 00:09:04.322 clat percentiles (usec): 00:09:04.322 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 265], 00:09:04.322 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:09:04.322 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 330], 95.00th=[ 392], 00:09:04.322 | 99.00th=[ 486], 99.50th=[ 545], 99.90th=[41681], 99.95th=[42206], 00:09:04.322 | 99.99th=[42206] 00:09:04.322 write: IOPS=1476, 
BW=5908KiB/s (6049kB/s)(6144KiB/1040msec); 0 zone resets 00:09:04.322 slat (nsec): min=6929, max=63167, avg=16721.86, stdev=7357.12 00:09:04.322 clat (usec): min=171, max=697, avg=220.24, stdev=45.64 00:09:04.322 lat (usec): min=181, max=723, avg=236.96, stdev=50.15 00:09:04.322 clat percentiles (usec): 00:09:04.322 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:09:04.322 | 30.00th=[ 194], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 210], 00:09:04.322 | 70.00th=[ 229], 80.00th=[ 253], 90.00th=[ 273], 95.00th=[ 310], 00:09:04.322 | 99.00th=[ 371], 99.50th=[ 408], 99.90th=[ 578], 99.95th=[ 701], 00:09:04.322 | 99.99th=[ 701] 00:09:04.322 bw ( KiB/s): min= 4096, max= 8192, per=26.05%, avg=6144.00, stdev=2896.31, samples=2 00:09:04.322 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:09:04.322 lat (usec) : 250=41.26%, 500=58.20%, 750=0.33% 00:09:04.322 lat (msec) : 50=0.20% 00:09:04.322 cpu : usr=2.12%, sys=4.33%, ctx=2987, majf=0, minf=1 00:09:04.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:04.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.322 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:04.322 00:09:04.322 Run status group 0 (all jobs): 00:09:04.322 READ: bw=19.0MiB/s (19.9MB/s), 2114KiB/s-6138KiB/s (2165kB/s-6285kB/s), io=19.7MiB (20.7MB), run=1001-1040msec 00:09:04.322 WRITE: bw=23.0MiB/s (24.2MB/s), 3973KiB/s-8140KiB/s (4068kB/s-8335kB/s), io=24.0MiB (25.1MB), run=1001-1040msec 00:09:04.322 00:09:04.322 Disk stats (read/write): 00:09:04.322 nvme0n1: ios=1115/1536, merge=0/0, ticks=661/341, in_queue=1002, util=96.39% 00:09:04.322 nvme0n2: ios=1500/1536, merge=0/0, ticks=476/315, in_queue=791, util=86.90% 00:09:04.322 nvme0n3: ios=539/1024, merge=0/0, ticks=522/258, 
in_queue=780, util=88.96% 00:09:04.322 nvme0n4: ios=1404/1536, merge=0/0, ticks=811/325, in_queue=1136, util=98.22% 00:09:04.322 22:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:04.322 [global] 00:09:04.322 thread=1 00:09:04.322 invalidate=1 00:09:04.322 rw=write 00:09:04.322 time_based=1 00:09:04.322 runtime=1 00:09:04.322 ioengine=libaio 00:09:04.322 direct=1 00:09:04.322 bs=4096 00:09:04.322 iodepth=128 00:09:04.322 norandommap=0 00:09:04.322 numjobs=1 00:09:04.322 00:09:04.322 verify_dump=1 00:09:04.322 verify_backlog=512 00:09:04.322 verify_state_save=0 00:09:04.322 do_verify=1 00:09:04.322 verify=crc32c-intel 00:09:04.322 [job0] 00:09:04.322 filename=/dev/nvme0n1 00:09:04.322 [job1] 00:09:04.322 filename=/dev/nvme0n2 00:09:04.322 [job2] 00:09:04.322 filename=/dev/nvme0n3 00:09:04.322 [job3] 00:09:04.322 filename=/dev/nvme0n4 00:09:04.322 Could not set queue depth (nvme0n1) 00:09:04.322 Could not set queue depth (nvme0n2) 00:09:04.322 Could not set queue depth (nvme0n3) 00:09:04.322 Could not set queue depth (nvme0n4) 00:09:04.322 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.322 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.322 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.322 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.322 fio-3.35 00:09:04.322 Starting 4 threads 00:09:05.697 00:09:05.697 job0: (groupid=0, jobs=1): err= 0: pid=3792764: Wed Jul 24 22:18:31 2024 00:09:05.697 read: IOPS=3687, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1004msec) 00:09:05.697 slat (usec): min=2, max=36585, avg=135.71, stdev=1067.87 00:09:05.697 clat (usec): min=3321, max=92008, 
avg=17211.92, stdev=13458.56 00:09:05.697 lat (usec): min=6072, max=92022, avg=17347.63, stdev=13532.76 00:09:05.697 clat percentiles (usec): 00:09:05.697 | 1.00th=[ 7635], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[11338], 00:09:05.697 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[16319], 00:09:05.697 | 70.00th=[17695], 80.00th=[18744], 90.00th=[21627], 95.00th=[40633], 00:09:05.697 | 99.00th=[86508], 99.50th=[91751], 99.90th=[91751], 99.95th=[91751], 00:09:05.697 | 99.99th=[91751] 00:09:05.697 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:09:05.697 slat (usec): min=4, max=10871, avg=110.02, stdev=493.39 00:09:05.697 clat (usec): min=5850, max=29987, avg=15224.27, stdev=5983.35 00:09:05.697 lat (usec): min=5863, max=30005, avg=15334.29, stdev=6025.37 00:09:05.697 clat percentiles (usec): 00:09:05.697 | 1.00th=[ 6063], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11207], 00:09:05.697 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11994], 60.00th=[13566], 00:09:05.698 | 70.00th=[15533], 80.00th=[22152], 90.00th=[25297], 95.00th=[27132], 00:09:05.698 | 99.00th=[29754], 99.50th=[29754], 99.90th=[30016], 99.95th=[30016], 00:09:05.698 | 99.99th=[30016] 00:09:05.698 bw ( KiB/s): min=12216, max=20480, per=27.47%, avg=16348.00, stdev=5843.53, samples=2 00:09:05.698 iops : min= 3054, max= 5120, avg=4087.00, stdev=1460.88, samples=2 00:09:05.698 lat (msec) : 4=0.01%, 10=7.75%, 20=72.48%, 50=18.08%, 100=1.68% 00:09:05.698 cpu : usr=6.18%, sys=9.07%, ctx=501, majf=0, minf=15 00:09:05.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.698 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.698 job1: (groupid=0, jobs=1): err= 0: pid=3792765: 
Wed Jul 24 22:18:31 2024 00:09:05.698 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:09:05.698 slat (usec): min=3, max=13630, avg=120.10, stdev=622.70 00:09:05.698 clat (usec): min=9316, max=38511, avg=15182.30, stdev=4385.92 00:09:05.698 lat (usec): min=9442, max=42826, avg=15302.40, stdev=4435.38 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11076], 20.00th=[11863], 00:09:05.698 | 30.00th=[12125], 40.00th=[12780], 50.00th=[13304], 60.00th=[14091], 00:09:05.698 | 70.00th=[17957], 80.00th=[19006], 90.00th=[20055], 95.00th=[23987], 00:09:05.698 | 99.00th=[27657], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:09:05.698 | 99.99th=[38536] 00:09:05.698 write: IOPS=3834, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1004msec); 0 zone resets 00:09:05.698 slat (usec): min=4, max=22350, avg=135.79, stdev=779.06 00:09:05.698 clat (usec): min=1190, max=49799, avg=18951.52, stdev=9068.61 00:09:05.698 lat (usec): min=1201, max=49819, avg=19087.31, stdev=9113.38 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[ 8225], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[10945], 00:09:05.698 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13960], 60.00th=[20841], 00:09:05.698 | 70.00th=[25560], 80.00th=[28181], 90.00th=[32637], 95.00th=[37487], 00:09:05.698 | 99.00th=[38536], 99.50th=[41157], 99.90th=[41157], 99.95th=[47973], 00:09:05.698 | 99.99th=[49546] 00:09:05.698 bw ( KiB/s): min=12488, max=17296, per=25.02%, avg=14892.00, stdev=3399.77, samples=2 00:09:05.698 iops : min= 3122, max= 4324, avg=3723.00, stdev=849.94, samples=2 00:09:05.698 lat (msec) : 2=0.16%, 4=0.01%, 10=3.82%, 20=69.28%, 50=26.73% 00:09:05.698 cpu : usr=5.78%, sys=9.57%, ctx=421, majf=0, minf=13 00:09:05.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:09:05.698 issued rwts: total=3584,3850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.698 job2: (groupid=0, jobs=1): err= 0: pid=3792766: Wed Jul 24 22:18:31 2024 00:09:05.698 read: IOPS=3581, BW=14.0MiB/s (14.7MB/s)(14.7MiB/1048msec) 00:09:05.698 slat (usec): min=3, max=18038, avg=132.90, stdev=920.60 00:09:05.698 clat (usec): min=2086, max=72553, avg=18782.53, stdev=9354.64 00:09:05.698 lat (usec): min=2090, max=77294, avg=18915.43, stdev=9401.96 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[ 4424], 5.00th=[11600], 10.00th=[13042], 20.00th=[14746], 00:09:05.698 | 30.00th=[15008], 40.00th=[15270], 50.00th=[16909], 60.00th=[17957], 00:09:05.698 | 70.00th=[19006], 80.00th=[20841], 90.00th=[24511], 95.00th=[31589], 00:09:05.698 | 99.00th=[68682], 99.50th=[71828], 99.90th=[72877], 99.95th=[72877], 00:09:05.698 | 99.99th=[72877] 00:09:05.698 write: IOPS=3908, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1048msec); 0 zone resets 00:09:05.698 slat (usec): min=4, max=14993, avg=106.53, stdev=795.56 00:09:05.698 clat (usec): min=922, max=34192, avg=15163.67, stdev=3712.69 00:09:05.698 lat (usec): min=934, max=34204, avg=15270.21, stdev=3770.46 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[ 4883], 5.00th=[ 8717], 10.00th=[11207], 20.00th=[13435], 00:09:05.698 | 30.00th=[13829], 40.00th=[14091], 50.00th=[14353], 60.00th=[15401], 00:09:05.698 | 70.00th=[16581], 80.00th=[17695], 90.00th=[20055], 95.00th=[21627], 00:09:05.698 | 99.00th=[28705], 99.50th=[29230], 99.90th=[29230], 99.95th=[31065], 00:09:05.698 | 99.99th=[34341] 00:09:05.698 bw ( KiB/s): min=16384, max=16384, per=27.53%, avg=16384.00, stdev= 0.00, samples=2 00:09:05.698 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:09:05.698 lat (usec) : 1000=0.08% 00:09:05.698 lat (msec) : 2=0.10%, 4=0.65%, 10=4.98%, 20=76.33%, 50=16.70% 00:09:05.698 lat (msec) : 100=1.16% 00:09:05.698 cpu : usr=3.25%, sys=6.49%, ctx=264, 
majf=0, minf=13 00:09:05.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.698 issued rwts: total=3753,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.698 job3: (groupid=0, jobs=1): err= 0: pid=3792767: Wed Jul 24 22:18:31 2024 00:09:05.698 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:09:05.698 slat (usec): min=4, max=11786, avg=125.43, stdev=727.55 00:09:05.698 clat (usec): min=9070, max=82036, avg=17708.05, stdev=10214.06 00:09:05.698 lat (usec): min=9079, max=85370, avg=17833.49, stdev=10256.08 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[10028], 5.00th=[11338], 10.00th=[12518], 20.00th=[13304], 00:09:05.698 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:09:05.698 | 70.00th=[15270], 80.00th=[19530], 90.00th=[28443], 95.00th=[34341], 00:09:05.698 | 99.00th=[68682], 99.50th=[79168], 99.90th=[82314], 99.95th=[82314], 00:09:05.698 | 99.99th=[82314] 00:09:05.698 write: IOPS=3528, BW=13.8MiB/s (14.5MB/s)(13.9MiB/1006msec); 0 zone resets 00:09:05.698 slat (usec): min=5, max=30371, avg=162.56, stdev=1260.00 00:09:05.698 clat (usec): min=1042, max=97101, avg=20379.04, stdev=16290.47 00:09:05.698 lat (usec): min=6612, max=97144, avg=20541.61, stdev=16451.14 00:09:05.698 clat percentiles (usec): 00:09:05.698 | 1.00th=[ 7504], 5.00th=[10290], 10.00th=[11731], 20.00th=[12911], 00:09:05.698 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:09:05.698 | 70.00th=[15270], 80.00th=[19268], 90.00th=[51643], 95.00th=[62129], 00:09:05.698 | 99.00th=[80217], 99.50th=[80217], 99.90th=[85459], 99.95th=[92799], 00:09:05.698 | 99.99th=[96994] 00:09:05.698 bw ( KiB/s): min= 8400, max=18976, per=23.00%, avg=13688.00, stdev=7478.36, 
samples=2 00:09:05.698 iops : min= 2100, max= 4744, avg=3422.00, stdev=1869.59, samples=2 00:09:05.698 lat (msec) : 2=0.02%, 10=2.17%, 20=79.12%, 50=11.05%, 100=7.64% 00:09:05.698 cpu : usr=5.27%, sys=8.46%, ctx=296, majf=0, minf=11 00:09:05.698 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:05.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:05.698 issued rwts: total=3072,3550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.698 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:05.698 00:09:05.698 Run status group 0 (all jobs): 00:09:05.698 READ: bw=52.6MiB/s (55.2MB/s), 11.9MiB/s-14.4MiB/s (12.5MB/s-15.1MB/s), io=55.1MiB (57.8MB), run=1004-1048msec 00:09:05.698 WRITE: bw=58.1MiB/s (60.9MB/s), 13.8MiB/s-15.9MiB/s (14.5MB/s-16.7MB/s), io=60.9MiB (63.9MB), run=1004-1048msec 00:09:05.698 00:09:05.698 Disk stats (read/write): 00:09:05.698 nvme0n1: ios=3326/3584, merge=0/0, ticks=18984/17067, in_queue=36051, util=96.79% 00:09:05.698 nvme0n2: ios=3123/3391, merge=0/0, ticks=13255/19423, in_queue=32678, util=97.26% 00:09:05.698 nvme0n3: ios=3108/3558, merge=0/0, ticks=37048/33115, in_queue=70163, util=96.55% 00:09:05.698 nvme0n4: ios=2593/2663, merge=0/0, ticks=15461/18557, in_queue=34018, util=97.25% 00:09:05.698 22:18:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:05.698 [global] 00:09:05.698 thread=1 00:09:05.698 invalidate=1 00:09:05.698 rw=randwrite 00:09:05.698 time_based=1 00:09:05.698 runtime=1 00:09:05.698 ioengine=libaio 00:09:05.698 direct=1 00:09:05.698 bs=4096 00:09:05.698 iodepth=128 00:09:05.698 norandommap=0 00:09:05.698 numjobs=1 00:09:05.698 00:09:05.698 verify_dump=1 00:09:05.698 verify_backlog=512 00:09:05.698 verify_state_save=0 00:09:05.698 
do_verify=1 00:09:05.698 verify=crc32c-intel 00:09:05.698 [job0] 00:09:05.698 filename=/dev/nvme0n1 00:09:05.698 [job1] 00:09:05.698 filename=/dev/nvme0n2 00:09:05.698 [job2] 00:09:05.698 filename=/dev/nvme0n3 00:09:05.698 [job3] 00:09:05.698 filename=/dev/nvme0n4 00:09:05.698 Could not set queue depth (nvme0n1) 00:09:05.698 Could not set queue depth (nvme0n2) 00:09:05.698 Could not set queue depth (nvme0n3) 00:09:05.698 Could not set queue depth (nvme0n4) 00:09:05.957 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.957 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.957 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.957 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:05.957 fio-3.35 00:09:05.957 Starting 4 threads 00:09:07.339 00:09:07.339 job0: (groupid=0, jobs=1): err= 0: pid=3792953: Wed Jul 24 22:18:32 2024 00:09:07.339 read: IOPS=4242, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec) 00:09:07.339 slat (usec): min=2, max=12139, avg=101.35, stdev=513.21 00:09:07.339 clat (usec): min=610, max=29662, avg=13601.48, stdev=3161.97 00:09:07.339 lat (usec): min=4630, max=31541, avg=13702.83, stdev=3159.51 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 4948], 5.00th=[10552], 10.00th=[10814], 20.00th=[11731], 00:09:07.339 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13566], 00:09:07.339 | 70.00th=[13960], 80.00th=[14615], 90.00th=[16581], 95.00th=[19530], 00:09:07.339 | 99.00th=[27395], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:09:07.339 | 99.99th=[29754] 00:09:07.339 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:09:07.339 slat (usec): min=4, max=9818, avg=111.18, stdev=583.74 00:09:07.339 clat (usec): min=8363, max=38260, 
avg=14982.12, stdev=6706.09 00:09:07.339 lat (usec): min=8373, max=38278, avg=15093.29, stdev=6741.44 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:09:07.339 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12780], 60.00th=[13173], 00:09:07.339 | 70.00th=[13566], 80.00th=[14746], 90.00th=[22676], 95.00th=[34866], 00:09:07.339 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:09:07.339 | 99.99th=[38011] 00:09:07.339 bw ( KiB/s): min=16440, max=20424, per=27.41%, avg=18432.00, stdev=2817.11, samples=2 00:09:07.339 iops : min= 4110, max= 5106, avg=4608.00, stdev=704.28, samples=2 00:09:07.339 lat (usec) : 750=0.01% 00:09:07.339 lat (msec) : 10=3.39%, 20=87.58%, 50=9.02% 00:09:07.339 cpu : usr=7.47%, sys=10.76%, ctx=465, majf=0, minf=9 00:09:07.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:07.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.339 issued rwts: total=4264,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.339 job1: (groupid=0, jobs=1): err= 0: pid=3792954: Wed Jul 24 22:18:32 2024 00:09:07.339 read: IOPS=3796, BW=14.8MiB/s (15.5MB/s)(14.9MiB/1005msec) 00:09:07.339 slat (usec): min=2, max=15979, avg=127.38, stdev=805.24 00:09:07.339 clat (usec): min=546, max=35443, avg=16339.94, stdev=5534.16 00:09:07.339 lat (usec): min=5542, max=35483, avg=16467.32, stdev=5588.14 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 5997], 5.00th=[ 9765], 10.00th=[11338], 20.00th=[11994], 00:09:07.339 | 30.00th=[12256], 40.00th=[13042], 50.00th=[14877], 60.00th=[17171], 00:09:07.339 | 70.00th=[18220], 80.00th=[20579], 90.00th=[23987], 95.00th=[28443], 00:09:07.339 | 99.00th=[29492], 99.50th=[31327], 99.90th=[33817], 99.95th=[34866], 00:09:07.339 
| 99.99th=[35390] 00:09:07.339 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:09:07.339 slat (usec): min=4, max=28442, avg=117.36, stdev=842.48 00:09:07.339 clat (usec): min=4659, max=71689, avg=15532.72, stdev=7890.07 00:09:07.339 lat (usec): min=4670, max=71704, avg=15650.08, stdev=7951.88 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 5735], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[11994], 00:09:07.339 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13173], 60.00th=[14877], 00:09:07.339 | 70.00th=[15795], 80.00th=[17171], 90.00th=[21890], 95.00th=[24511], 00:09:07.339 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:09:07.339 | 99.99th=[71828] 00:09:07.339 bw ( KiB/s): min=15272, max=17496, per=24.36%, avg=16384.00, stdev=1572.61, samples=2 00:09:07.339 iops : min= 3818, max= 4374, avg=4096.00, stdev=393.15, samples=2 00:09:07.339 lat (usec) : 750=0.01% 00:09:07.339 lat (msec) : 10=7.51%, 20=73.73%, 50=17.67%, 100=1.07% 00:09:07.339 cpu : usr=4.68%, sys=6.87%, ctx=320, majf=0, minf=13 00:09:07.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:07.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.339 issued rwts: total=3815,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.339 job2: (groupid=0, jobs=1): err= 0: pid=3792955: Wed Jul 24 22:18:32 2024 00:09:07.339 read: IOPS=3433, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1005msec) 00:09:07.339 slat (usec): min=4, max=15144, avg=135.64, stdev=802.83 00:09:07.339 clat (usec): min=2210, max=44874, avg=17690.70, stdev=5970.86 00:09:07.339 lat (usec): min=6583, max=44897, avg=17826.34, stdev=6024.49 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 6915], 5.00th=[11863], 10.00th=[13042], 20.00th=[13960], 00:09:07.339 | 
30.00th=[14353], 40.00th=[14877], 50.00th=[15664], 60.00th=[17171], 00:09:07.339 | 70.00th=[18482], 80.00th=[21627], 90.00th=[25297], 95.00th=[30016], 00:09:07.339 | 99.00th=[41157], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:09:07.339 | 99.99th=[44827] 00:09:07.339 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:07.339 slat (usec): min=5, max=11364, avg=136.09, stdev=745.54 00:09:07.339 clat (usec): min=7219, max=41909, avg=18385.85, stdev=7586.40 00:09:07.339 lat (usec): min=7231, max=41931, avg=18521.94, stdev=7655.80 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 9503], 5.00th=[11994], 10.00th=[12780], 20.00th=[13304], 00:09:07.339 | 30.00th=[13698], 40.00th=[14222], 50.00th=[15008], 60.00th=[15401], 00:09:07.339 | 70.00th=[19006], 80.00th=[23725], 90.00th=[31327], 95.00th=[35390], 00:09:07.339 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:09:07.339 | 99.99th=[41681] 00:09:07.339 bw ( KiB/s): min=12288, max=16384, per=21.32%, avg=14336.00, stdev=2896.31, samples=2 00:09:07.339 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:07.339 lat (msec) : 4=0.01%, 10=1.79%, 20=72.30%, 50=25.90% 00:09:07.339 cpu : usr=6.18%, sys=8.86%, ctx=348, majf=0, minf=17 00:09:07.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:07.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.339 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.339 job3: (groupid=0, jobs=1): err= 0: pid=3792956: Wed Jul 24 22:18:32 2024 00:09:07.339 read: IOPS=4582, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:09:07.339 slat (usec): min=4, max=6570, avg=107.83, stdev=608.18 00:09:07.339 clat (usec): min=1867, max=20889, avg=13751.36, stdev=1949.06 
00:09:07.339 lat (usec): min=5815, max=20929, avg=13859.19, stdev=2005.14 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 8455], 5.00th=[10159], 10.00th=[11469], 20.00th=[13042], 00:09:07.339 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13960], 00:09:07.339 | 70.00th=[14222], 80.00th=[14615], 90.00th=[16057], 95.00th=[17433], 00:09:07.339 | 99.00th=[19006], 99.50th=[19530], 99.90th=[20841], 99.95th=[20841], 00:09:07.339 | 99.99th=[20841] 00:09:07.339 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:09:07.339 slat (usec): min=6, max=13705, avg=96.39, stdev=513.60 00:09:07.339 clat (usec): min=719, max=29831, avg=13873.52, stdev=2043.02 00:09:07.339 lat (usec): min=738, max=29851, avg=13969.92, stdev=2080.67 00:09:07.339 clat percentiles (usec): 00:09:07.339 | 1.00th=[ 8455], 5.00th=[11076], 10.00th=[12256], 20.00th=[12780], 00:09:07.339 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829], 00:09:07.339 | 70.00th=[14484], 80.00th=[14746], 90.00th=[16450], 95.00th=[17433], 00:09:07.339 | 99.00th=[21627], 99.50th=[21627], 99.90th=[21627], 99.95th=[28181], 00:09:07.339 | 99.99th=[29754] 00:09:07.339 bw ( KiB/s): min=16904, max=19960, per=27.41%, avg=18432.00, stdev=2160.92, samples=2 00:09:07.339 iops : min= 4226, max= 4990, avg=4608.00, stdev=540.23, samples=2 00:09:07.339 lat (usec) : 750=0.02% 00:09:07.339 lat (msec) : 2=0.01%, 10=3.72%, 20=95.29%, 50=0.96% 00:09:07.339 cpu : usr=8.87%, sys=10.97%, ctx=532, majf=0, minf=11 00:09:07.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:07.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:07.339 issued rwts: total=4601,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.339 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:07.339 00:09:07.339 Run status group 0 (all jobs): 00:09:07.339 
READ: bw=62.7MiB/s (65.7MB/s), 13.4MiB/s-17.9MiB/s (14.1MB/s-18.8MB/s), io=63.0MiB (66.1MB), run=1004-1005msec 00:09:07.339 WRITE: bw=65.7MiB/s (68.9MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=66.0MiB (69.2MB), run=1004-1005msec 00:09:07.339 00:09:07.339 Disk stats (read/write): 00:09:07.339 nvme0n1: ios=3649/4096, merge=0/0, ticks=12389/15193, in_queue=27582, util=87.17% 00:09:07.339 nvme0n2: ios=3097/3511, merge=0/0, ticks=24141/24447, in_queue=48588, util=87.28% 00:09:07.339 nvme0n3: ios=2593/3072, merge=0/0, ticks=23802/27165, in_queue=50967, util=100.00% 00:09:07.340 nvme0n4: ios=3704/4096, merge=0/0, ticks=24912/30198, in_queue=55110, util=98.95% 00:09:07.340 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:07.340 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3793062 00:09:07.340 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:07.340 22:18:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:07.340 [global] 00:09:07.340 thread=1 00:09:07.340 invalidate=1 00:09:07.340 rw=read 00:09:07.340 time_based=1 00:09:07.340 runtime=10 00:09:07.340 ioengine=libaio 00:09:07.340 direct=1 00:09:07.340 bs=4096 00:09:07.340 iodepth=1 00:09:07.340 norandommap=1 00:09:07.340 numjobs=1 00:09:07.340 00:09:07.340 [job0] 00:09:07.340 filename=/dev/nvme0n1 00:09:07.340 [job1] 00:09:07.340 filename=/dev/nvme0n2 00:09:07.340 [job2] 00:09:07.340 filename=/dev/nvme0n3 00:09:07.340 [job3] 00:09:07.340 filename=/dev/nvme0n4 00:09:07.340 Could not set queue depth (nvme0n1) 00:09:07.340 Could not set queue depth (nvme0n2) 00:09:07.340 Could not set queue depth (nvme0n3) 00:09:07.340 Could not set queue depth (nvme0n4) 00:09:07.340 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:09:07.340 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.340 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.340 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:07.340 fio-3.35 00:09:07.340 Starting 4 threads 00:09:10.629 22:18:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:10.629 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:10.629 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=27439104, buflen=4096 00:09:10.629 fio: pid=3793225, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:10.629 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.629 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:10.629 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=11137024, buflen=4096 00:09:10.629 fio: pid=3793223, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:11.199 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.199 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:11.199 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=1490944, buflen=4096 00:09:11.199 fio: pid=3793221, 
err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:11.458 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=31830016, buflen=4096 00:09:11.458 fio: pid=3793222, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:09:11.458 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.458 22:18:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:11.458 00:09:11.458 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3793221: Wed Jul 24 22:18:36 2024 00:09:11.458 read: IOPS=103, BW=411KiB/s (421kB/s)(1456KiB/3542msec) 00:09:11.458 slat (usec): min=6, max=1893, avg=25.92, stdev=99.15 00:09:11.458 clat (usec): min=241, max=42405, avg=9634.71, stdev=17054.28 00:09:11.458 lat (usec): min=247, max=43018, avg=9660.26, stdev=17066.96 00:09:11.458 clat percentiles (usec): 00:09:11.458 | 1.00th=[ 245], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 260], 00:09:11.458 | 30.00th=[ 273], 40.00th=[ 355], 50.00th=[ 486], 60.00th=[ 506], 00:09:11.458 | 70.00th=[ 545], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:09:11.458 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:11.458 | 99.99th=[42206] 00:09:11.458 bw ( KiB/s): min= 96, max= 440, per=1.38%, avg=252.00, stdev=143.18, samples=6 00:09:11.458 iops : min= 24, max= 110, avg=63.00, stdev=35.79, samples=6 00:09:11.458 lat (usec) : 250=6.03%, 500=50.41%, 750=20.00%, 1000=0.55% 00:09:11.458 lat (msec) : 50=22.74% 00:09:11.458 cpu : usr=0.20%, sys=0.23%, ctx=371, majf=0, minf=1 00:09:11.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 complete : 0=0.3%, 4=99.7%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 issued rwts: total=365,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.458 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3793222: Wed Jul 24 22:18:36 2024 00:09:11.458 read: IOPS=2028, BW=8112KiB/s (8306kB/s)(30.4MiB/3832msec) 00:09:11.458 slat (usec): min=4, max=21834, avg=14.97, stdev=260.21 00:09:11.458 clat (usec): min=207, max=50044, avg=475.45, stdev=2502.17 00:09:11.458 lat (usec): min=213, max=62971, avg=489.52, stdev=2560.03 00:09:11.458 clat percentiles (usec): 00:09:11.458 | 1.00th=[ 221], 5.00th=[ 243], 10.00th=[ 269], 20.00th=[ 281], 00:09:11.458 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 306], 60.00th=[ 322], 00:09:11.458 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 453], 00:09:11.458 | 99.00th=[ 545], 99.50th=[ 668], 99.90th=[41681], 99.95th=[41681], 00:09:11.458 | 99.99th=[50070] 00:09:11.458 bw ( KiB/s): min= 2520, max=13576, per=48.04%, avg=8802.29, stdev=4044.33, samples=7 00:09:11.458 iops : min= 630, max= 3394, avg=2200.57, stdev=1011.08, samples=7 00:09:11.458 lat (usec) : 250=5.93%, 500=91.95%, 750=1.63%, 1000=0.01% 00:09:11.458 lat (msec) : 2=0.09%, 50=0.36%, 100=0.01% 00:09:11.458 cpu : usr=1.41%, sys=2.77%, ctx=7776, majf=0, minf=1 00:09:11.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 issued rwts: total=7772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.458 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3793223: Wed Jul 24 22:18:36 2024 00:09:11.458 read: IOPS=842, BW=3368KiB/s (3449kB/s)(10.6MiB/3229msec) 
00:09:11.458 slat (nsec): min=5023, max=48325, avg=12308.97, stdev=5477.43 00:09:11.458 clat (usec): min=229, max=43104, avg=1163.92, stdev=5726.88 00:09:11.458 lat (usec): min=236, max=43119, avg=1176.23, stdev=5728.87 00:09:11.458 clat percentiles (usec): 00:09:11.458 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 260], 20.00th=[ 277], 00:09:11.458 | 30.00th=[ 318], 40.00th=[ 347], 50.00th=[ 359], 60.00th=[ 367], 00:09:11.458 | 70.00th=[ 375], 80.00th=[ 383], 90.00th=[ 396], 95.00th=[ 433], 00:09:11.458 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:09:11.458 | 99.99th=[43254] 00:09:11.458 bw ( KiB/s): min= 96, max=10200, per=15.60%, avg=2858.67, stdev=4027.24, samples=6 00:09:11.458 iops : min= 24, max= 2550, avg=714.67, stdev=1006.81, samples=6 00:09:11.458 lat (usec) : 250=6.58%, 500=90.62%, 750=0.62%, 1000=0.11% 00:09:11.458 lat (msec) : 50=2.02% 00:09:11.458 cpu : usr=0.40%, sys=1.39%, ctx=2721, majf=0, minf=1 00:09:11.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 issued rwts: total=2720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.458 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3793225: Wed Jul 24 22:18:36 2024 00:09:11.458 read: IOPS=2285, BW=9139KiB/s (9358kB/s)(26.2MiB/2932msec) 00:09:11.458 slat (nsec): min=5095, max=54963, avg=11106.30, stdev=4164.83 00:09:11.458 clat (usec): min=252, max=41951, avg=420.71, stdev=1986.72 00:09:11.458 lat (usec): min=258, max=41982, avg=431.82, stdev=1987.59 00:09:11.458 clat percentiles (usec): 00:09:11.458 | 1.00th=[ 273], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:09:11.458 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:09:11.458 
| 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 375], 95.00th=[ 445], 00:09:11.458 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[41157], 99.95th=[41157], 00:09:11.458 | 99.99th=[42206] 00:09:11.458 bw ( KiB/s): min= 96, max=12632, per=47.68%, avg=8736.00, stdev=5163.66, samples=5 00:09:11.458 iops : min= 24, max= 3158, avg=2184.00, stdev=1290.91, samples=5 00:09:11.458 lat (usec) : 500=99.40%, 750=0.30% 00:09:11.458 lat (msec) : 2=0.03%, 4=0.01%, 50=0.24% 00:09:11.458 cpu : usr=1.09%, sys=3.14%, ctx=6700, majf=0, minf=1 00:09:11.458 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.458 issued rwts: total=6700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.458 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.458 00:09:11.458 Run status group 0 (all jobs): 00:09:11.458 READ: bw=17.9MiB/s (18.8MB/s), 411KiB/s-9139KiB/s (421kB/s-9358kB/s), io=68.6MiB (71.9MB), run=2932-3832msec 00:09:11.458 00:09:11.458 Disk stats (read/write): 00:09:11.458 nvme0n1: ios=396/0, merge=0/0, ticks=4212/0, in_queue=4212, util=99.80% 00:09:11.458 nvme0n2: ios=7806/0, merge=0/0, ticks=3555/0, in_queue=3555, util=98.55% 00:09:11.458 nvme0n3: ios=2488/0, merge=0/0, ticks=3050/0, in_queue=3050, util=96.79% 00:09:11.458 nvme0n4: ios=6518/0, merge=0/0, ticks=2690/0, in_queue=2690, util=96.75% 00:09:11.716 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.716 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:11.974 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 
00:09:11.974 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:12.231 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:12.231 22:18:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:12.489 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:12.489 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:12.747 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:12.747 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3793062 00:09:12.747 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:12.747 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:13.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l 
-o NAME,SERIAL 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:13.005 nvmf hotplug test: fio failed as expected 00:09:13.005 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:13.263 rmmod 
nvme_tcp 00:09:13.263 rmmod nvme_fabrics 00:09:13.263 rmmod nvme_keyring 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3791470 ']' 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3791470 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3791470 ']' 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3791470 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3791470 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3791470' 00:09:13.263 killing process with pid 3791470 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3791470 00:09:13.263 22:18:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3791470 00:09:13.521 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.522 22:18:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:16.063 00:09:16.063 real 0m23.765s 00:09:16.063 user 1m24.035s 00:09:16.063 sys 0m7.150s 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.063 ************************************ 00:09:16.063 END TEST nvmf_fio_target 00:09:16.063 ************************************ 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.063 ************************************ 00:09:16.063 START TEST 
nvmf_bdevio 00:09:16.063 ************************************ 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.063 * Looking for test storage... 00:09:16.063 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- 
# NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:16.063 22:18:41 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:16.063 22:18:41 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:17.445 22:18:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.445 22:18:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:17.445 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:17.445 Found 
0000:08:00.1 (0x8086 - 0x159b) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:17.445 Found net devices under 0000:08:00.0: cvl_0_0 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:17.445 Found net devices under 0000:08:00.1: cvl_0_1 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:17.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:09:17.445 00:09:17.445 --- 10.0.0.2 ping statistics --- 00:09:17.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.445 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:09:17.445 00:09:17.445 --- 10.0.0.1 ping statistics --- 00:09:17.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.445 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:17.445 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.446 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:17.446 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:17.705 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:17.705 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:17.705 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3795273 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3795273 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3795273 ']' 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:17.706 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.706 [2024-07-24 22:18:43.214999] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:09:17.706 [2024-07-24 22:18:43.215094] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.706 EAL: No free 2048 kB hugepages reported on node 1 00:09:17.706 [2024-07-24 22:18:43.284829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.706 [2024-07-24 22:18:43.405544] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.706 [2024-07-24 22:18:43.405616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.706 [2024-07-24 22:18:43.405632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.706 [2024-07-24 22:18:43.405645] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.706 [2024-07-24 22:18:43.405657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:17.706 [2024-07-24 22:18:43.405715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.706 [2024-07-24 22:18:43.405768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.706 [2024-07-24 22:18:43.405847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.706 [2024-07-24 22:18:43.405851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 [2024-07-24 22:18:43.557848] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.965 22:18:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 Malloc0 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.965 [2024-07-24 22:18:43.608422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:17.965 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:17.965 { 00:09:17.965 "params": { 00:09:17.966 "name": "Nvme$subsystem", 00:09:17.966 "trtype": "$TEST_TRANSPORT", 00:09:17.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.966 "adrfam": "ipv4", 00:09:17.966 "trsvcid": "$NVMF_PORT", 00:09:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.966 "hdgst": ${hdgst:-false}, 00:09:17.966 "ddgst": ${ddgst:-false} 00:09:17.966 }, 00:09:17.966 "method": "bdev_nvme_attach_controller" 00:09:17.966 } 00:09:17.966 EOF 00:09:17.966 )") 00:09:17.966 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:17.966 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:17.966 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:17.966 22:18:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:17.966 "params": { 00:09:17.966 "name": "Nvme1", 00:09:17.966 "trtype": "tcp", 00:09:17.966 "traddr": "10.0.0.2", 00:09:17.966 "adrfam": "ipv4", 00:09:17.966 "trsvcid": "4420", 00:09:17.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.966 "hdgst": false, 00:09:17.966 "ddgst": false 00:09:17.966 }, 00:09:17.966 "method": "bdev_nvme_attach_controller" 00:09:17.966 }' 00:09:17.966 [2024-07-24 22:18:43.659675] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:09:17.966 [2024-07-24 22:18:43.659765] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3795298 ] 00:09:18.224 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.224 [2024-07-24 22:18:43.723275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:18.224 [2024-07-24 22:18:43.843532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.224 [2024-07-24 22:18:43.843618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.224 [2024-07-24 22:18:43.843652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.483 I/O targets: 00:09:18.484 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:18.484 00:09:18.484 00:09:18.484 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.484 http://cunit.sourceforge.net/ 00:09:18.484 00:09:18.484 00:09:18.484 Suite: bdevio tests on: Nvme1n1 00:09:18.484 Test: blockdev write read block ...passed 00:09:18.484 Test: blockdev write zeroes read block ...passed 00:09:18.484 Test: blockdev write zeroes read no split 
...passed 00:09:18.744 Test: blockdev write zeroes read split ...passed 00:09:18.744 Test: blockdev write zeroes read split partial ...passed 00:09:18.744 Test: blockdev reset ...[2024-07-24 22:18:44.252776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:18.744 [2024-07-24 22:18:44.252896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed3f60 (9): Bad file descriptor 00:09:18.744 [2024-07-24 22:18:44.270446] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:18.744 passed 00:09:18.744 Test: blockdev write read 8 blocks ...passed 00:09:18.744 Test: blockdev write read size > 128k ...passed 00:09:18.744 Test: blockdev write read invalid size ...passed 00:09:18.744 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:18.744 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:18.744 Test: blockdev write read max offset ...passed 00:09:18.744 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:18.744 Test: blockdev writev readv 8 blocks ...passed 00:09:19.005 Test: blockdev writev readv 30 x 1block ...passed 00:09:19.005 Test: blockdev writev readv block ...passed 00:09:19.005 Test: blockdev writev readv size > 128k ...passed 00:09:19.005 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:19.005 Test: blockdev comparev and writev ...[2024-07-24 22:18:44.527618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.527669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.527696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:19.005 [2024-07-24 22:18:44.527716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.528100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.528128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.528151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.528168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.528559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.528590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.528615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.528635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.529008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.005 [2024-07-24 22:18:44.529033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:19.005 [2024-07-24 22:18:44.529057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:19.006 [2024-07-24 22:18:44.529073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:19.006 passed 00:09:19.006 Test: blockdev nvme passthru rw ...passed 00:09:19.006 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:18:44.611822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.006 [2024-07-24 22:18:44.611851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:19.006 [2024-07-24 22:18:44.612043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.006 [2024-07-24 22:18:44.612068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:19.006 [2024-07-24 22:18:44.612256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.006 [2024-07-24 22:18:44.612281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:19.006 [2024-07-24 22:18:44.612472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:19.006 [2024-07-24 22:18:44.612504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:19.006 passed 00:09:19.006 Test: blockdev nvme admin passthru ...passed 00:09:19.006 Test: blockdev copy ...passed 00:09:19.006 00:09:19.006 Run Summary: Type Total Ran Passed Failed Inactive 00:09:19.006 suites 1 1 n/a 0 0 00:09:19.006 tests 23 23 23 0 0 00:09:19.006 asserts 152 152 152 0 n/a 00:09:19.006 00:09:19.006 Elapsed time = 
1.227 seconds 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.265 rmmod nvme_tcp 00:09:19.265 rmmod nvme_fabrics 00:09:19.265 rmmod nvme_keyring 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3795273 ']' 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3795273 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@948 -- # '[' -z 3795273 ']' 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3795273 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3795273 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3795273' 00:09:19.265 killing process with pid 3795273 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3795273 00:09:19.265 22:18:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3795273 00:09:19.525 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:19.525 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:19.525 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:19.525 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:19.525 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:19.526 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.526 22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.526 
22:18:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.068 00:09:22.068 real 0m5.990s 00:09:22.068 user 0m9.844s 00:09:22.068 sys 0m1.859s 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:22.068 ************************************ 00:09:22.068 END TEST nvmf_bdevio 00:09:22.068 ************************************ 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:22.068 00:09:22.068 real 3m51.094s 00:09:22.068 user 10m12.997s 00:09:22.068 sys 1m4.642s 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.068 ************************************ 00:09:22.068 END TEST nvmf_target_core 00:09:22.068 ************************************ 00:09:22.068 22:18:47 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:22.068 22:18:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.068 22:18:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.068 22:18:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.068 ************************************ 00:09:22.068 START TEST nvmf_target_extra 00:09:22.068 ************************************ 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:22.068 * Looking for test storage... 
00:09:22.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:22.068 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.069 22:18:47 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:22.069 ************************************ 00:09:22.069 START TEST nvmf_example 00:09:22.069 ************************************ 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:22.069 * Looking for test storage... 
00:09:22.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.069 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:22.070 22:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.070 22:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.070 22:18:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:23.453 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:23.453 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.453 22:18:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:23.453 Found net devices under 0000:08:00.0: cvl_0_0 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.453 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:23.454 Found net devices under 0000:08:00.1: cvl_0_1 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:09:23.454 00:09:23.454 --- 10.0.0.2 ping statistics --- 00:09:23.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.454 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:09:23.454 00:09:23.454 --- 10.0.0.1 ping statistics --- 00:09:23.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.454 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:23.454 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3796950 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3796950 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3796950 ']' 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:23.713 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.713 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.973 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:23.974 22:18:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:23.974 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.220 Initializing NVMe Controllers 00:09:36.220 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:36.220 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:36.220 Initialization complete. Launching workers. 00:09:36.220 ======================================================== 00:09:36.220 Latency(us) 00:09:36.220 Device Information : IOPS MiB/s Average min max 00:09:36.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13111.20 51.22 4882.28 1047.30 15783.46 00:09:36.220 ======================================================== 00:09:36.220 Total : 13111.20 51.22 4882.28 1047.30 15783.46 00:09:36.220 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:36.220 rmmod nvme_tcp 00:09:36.220 rmmod nvme_fabrics 00:09:36.220 rmmod nvme_keyring 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3796950 ']' 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3796950 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3796950 ']' 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3796950 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3796950 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3796950' 00:09:36.220 killing process with pid 3796950 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 3796950 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 3796950 00:09:36.220 nvmf threads initialize successfully 00:09:36.220 bdev subsystem init successfully 00:09:36.220 created a nvmf target service 00:09:36.220 create targets's poll groups done 00:09:36.220 all subsystems of target started 00:09:36.220 nvmf target is running 00:09:36.220 all subsystems of target stopped 00:09:36.220 destroy targets's poll groups done 00:09:36.220 destroyed the nvmf target 
service 00:09:36.220 bdev subsystem finish successfully 00:09:36.220 nvmf threads destroy successfully 00:09:36.220 22:18:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.220 22:19:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:36.479 00:09:36.479 real 0m14.668s 00:09:36.479 user 0m39.388s 00:09:36.479 sys 0m3.950s 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:36.479 ************************************ 00:09:36.479 END TEST nvmf_example 00:09:36.479 ************************************ 00:09:36.479 22:19:02 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:36.479 ************************************ 00:09:36.479 START TEST nvmf_filesystem 00:09:36.479 ************************************ 00:09:36.479 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:36.741 * Looking for test storage... 00:09:36.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:36.741 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:36.741 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:36.742 #define SPDK_CONFIG_H 00:09:36.742 #define SPDK_CONFIG_APPS 1 00:09:36.742 #define SPDK_CONFIG_ARCH native 00:09:36.742 #undef SPDK_CONFIG_ASAN 00:09:36.742 #undef SPDK_CONFIG_AVAHI 00:09:36.742 #undef SPDK_CONFIG_CET 00:09:36.742 #define SPDK_CONFIG_COVERAGE 1 00:09:36.742 #define SPDK_CONFIG_CROSS_PREFIX 00:09:36.742 #undef SPDK_CONFIG_CRYPTO 00:09:36.742 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:36.742 #undef SPDK_CONFIG_CUSTOMOCF 00:09:36.742 #undef SPDK_CONFIG_DAOS 00:09:36.742 #define SPDK_CONFIG_DAOS_DIR 00:09:36.742 #define SPDK_CONFIG_DEBUG 1 00:09:36.742 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:36.742 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:36.742 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:36.742 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:36.742 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:36.742 #undef SPDK_CONFIG_DPDK_UADK 00:09:36.742 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:36.742 #define SPDK_CONFIG_EXAMPLES 1 00:09:36.742 #undef SPDK_CONFIG_FC 00:09:36.742 #define SPDK_CONFIG_FC_PATH 00:09:36.742 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:36.742 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:36.742 
#undef SPDK_CONFIG_FUSE 00:09:36.742 #undef SPDK_CONFIG_FUZZER 00:09:36.742 #define SPDK_CONFIG_FUZZER_LIB 00:09:36.742 #undef SPDK_CONFIG_GOLANG 00:09:36.742 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:36.742 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:36.742 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:36.742 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:36.742 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:36.742 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:36.742 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:36.742 #define SPDK_CONFIG_IDXD 1 00:09:36.742 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:36.742 #undef SPDK_CONFIG_IPSEC_MB 00:09:36.742 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:36.742 #define SPDK_CONFIG_ISAL 1 00:09:36.742 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:36.742 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:36.742 #define SPDK_CONFIG_LIBDIR 00:09:36.742 #undef SPDK_CONFIG_LTO 00:09:36.742 #define SPDK_CONFIG_MAX_LCORES 128 00:09:36.742 #define SPDK_CONFIG_NVME_CUSE 1 00:09:36.742 #undef SPDK_CONFIG_OCF 00:09:36.742 #define SPDK_CONFIG_OCF_PATH 00:09:36.742 #define SPDK_CONFIG_OPENSSL_PATH 00:09:36.742 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:36.742 #define SPDK_CONFIG_PGO_DIR 00:09:36.742 #undef SPDK_CONFIG_PGO_USE 00:09:36.742 #define SPDK_CONFIG_PREFIX /usr/local 00:09:36.742 #undef SPDK_CONFIG_RAID5F 00:09:36.742 #undef SPDK_CONFIG_RBD 00:09:36.742 #define SPDK_CONFIG_RDMA 1 00:09:36.742 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:36.742 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:36.742 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:36.742 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:36.742 #define SPDK_CONFIG_SHARED 1 00:09:36.742 #undef SPDK_CONFIG_SMA 00:09:36.742 #define SPDK_CONFIG_TESTS 1 00:09:36.742 #undef SPDK_CONFIG_TSAN 00:09:36.742 #define SPDK_CONFIG_UBLK 1 00:09:36.742 #define SPDK_CONFIG_UBSAN 1 00:09:36.742 #undef SPDK_CONFIG_UNIT_TESTS 00:09:36.742 #undef SPDK_CONFIG_URING 00:09:36.742 #define SPDK_CONFIG_URING_PATH 00:09:36.742 #undef 
SPDK_CONFIG_URING_ZNS 00:09:36.742 #undef SPDK_CONFIG_USDT 00:09:36.742 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:36.742 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:36.742 #define SPDK_CONFIG_VFIO_USER 1 00:09:36.742 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:36.742 #define SPDK_CONFIG_VHOST 1 00:09:36.742 #define SPDK_CONFIG_VIRTIO 1 00:09:36.742 #undef SPDK_CONFIG_VTUNE 00:09:36.742 #define SPDK_CONFIG_VTUNE_DIR 00:09:36.742 #define SPDK_CONFIG_WERROR 1 00:09:36.742 #define SPDK_CONFIG_WPDK_DIR 00:09:36.742 #undef SPDK_CONFIG_XNVME 00:09:36.742 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.742 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.742 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:36.743 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:36.743 
22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:36.743 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:36.743 
22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:36.743 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:36.744 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:36.744 
22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/
usr/local/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:09:36.744 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@263 -- # export valgrind= 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j32 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3798255 ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3798255 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.9WWE1e 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.9WWE1e/tests/target /tmp/spdk.9WWE1e 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=1957711872 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:09:36.745 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=3326717952 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=42759630848 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=53546168320 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10786537472 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26761826304 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773082112 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:36.745 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=10687102976 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=10709233664 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22130688 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26772307968 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26773086208 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=778240 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:09:36.745 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=5354610688 00:09:36.746 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5354614784 00:09:36.746 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:09:36.746 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:09:36.746 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:09:36.746 * Looking for test storage... 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=42759630848 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # 
new_size=13001129984 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:36.747 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:36.747 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:36.748 22:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:36.748 22:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:38.652 22:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:09:38.652 Found 0000:08:00.0 (0x8086 - 0x159b) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:09:38.652 Found 0000:08:00.1 (0x8086 - 0x159b) 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.652 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:09:38.653 Found net devices under 0000:08:00.0: cvl_0_0 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:09:38.653 Found net devices under 0000:08:00.1: cvl_0_1 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:38.653 22:19:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:38.653 22:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:38.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:38.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:09:38.653 00:09:38.653 --- 10.0.0.2 ping statistics --- 00:09:38.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.653 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:38.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:38.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:09:38.653 00:09:38.653 --- 10.0.0.1 ping statistics --- 00:09:38.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.653 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:38.653 22:19:04 
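The network plumbing traced above (from nvmf/common.sh) boils down to a short sequence: move one interface into a fresh namespace, address both sides, open TCP/4420, and ping both directions. A dry-run sketch — the `DO` wrapper is an addition for safe illustration; clear it (`DO=`) to execute for real, which requires root and the two cvl_* netdevs:

```shell
# Dry-run by default: each command is printed, not executed.
DO=${DO:-echo}

NS=cvl_0_0_ns_spdk               # network namespace holding the target-side NIC
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

$DO ip -4 addr flush "$TGT_IF"
$DO ip -4 addr flush "$INIT_IF"
$DO ip netns add "$NS"
$DO ip link set "$TGT_IF" netns "$NS"              # target NIC into the namespace
$DO ip addr add "$INIT_IP/24" dev "$INIT_IF"
$DO ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
$DO ip link set "$INIT_IF" up
$DO ip netns exec "$NS" ip link set "$TGT_IF" up
$DO ip netns exec "$NS" ip link set lo up
$DO iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
$DO ping -c 1 "$TGT_IP"                            # initiator -> target
$DO ip netns exec "$NS" ping -c 1 "$INIT_IP"       # target -> initiator
```

Both pings succeeding (as in the trace above) is what lets the helper `return 0`.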
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.653 ************************************ 00:09:38.653 START TEST nvmf_filesystem_no_in_capsule 00:09:38.653 ************************************ 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3799504 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3799504 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@829 -- # '[' -z 3799504 ']' 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:38.653 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.653 [2024-07-24 22:19:04.162255] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:09:38.653 [2024-07-24 22:19:04.162348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.653 EAL: No free 2048 kB hugepages reported on node 1 00:09:38.653 [2024-07-24 22:19:04.229015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.653 [2024-07-24 22:19:04.346571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.653 [2024-07-24 22:19:04.346635] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:38.653 [2024-07-24 22:19:04.346651] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.653 [2024-07-24 22:19:04.346664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.653 [2024-07-24 22:19:04.346677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.653 [2024-07-24 22:19:04.346776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.653 [2024-07-24 22:19:04.346890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.653 [2024-07-24 22:19:04.346944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.653 [2024-07-24 22:19:04.346940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
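The target application start traced above (four reactors on cores 0-3) reduces to one command run inside the namespace; a dry-run sketch using the binary path from this log (adjust to your workspace):

```shell
DO=${DO:-echo}                   # dry-run: print the command instead of executing it
NS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

# -i 0: shared-memory instance id, -e 0xFFFF: enable all tracepoint groups,
# -m 0xF: core mask for cores 0-3 (matches the four reactors in the log)
$DO ip netns exec "$NS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF
```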
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.914 [2024-07-24 22:19:04.498768] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:38.914 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 Malloc1 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 [2024-07-24 22:19:04.668123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:09:39.175 22:19:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:09:39.175 { 00:09:39.175 "name": "Malloc1", 00:09:39.175 "aliases": [ 00:09:39.175 "f3b1b4d4-ad81-4178-a22a-5698f9b4d447" 00:09:39.175 ], 00:09:39.175 "product_name": "Malloc disk", 00:09:39.175 "block_size": 512, 00:09:39.175 "num_blocks": 1048576, 00:09:39.175 "uuid": "f3b1b4d4-ad81-4178-a22a-5698f9b4d447", 00:09:39.175 "assigned_rate_limits": { 00:09:39.175 "rw_ios_per_sec": 0, 00:09:39.175 "rw_mbytes_per_sec": 0, 00:09:39.175 "r_mbytes_per_sec": 0, 00:09:39.175 "w_mbytes_per_sec": 0 00:09:39.175 }, 00:09:39.175 "claimed": true, 00:09:39.175 "claim_type": "exclusive_write", 00:09:39.175 "zoned": false, 00:09:39.175 "supported_io_types": { 00:09:39.175 "read": true, 00:09:39.175 "write": true, 00:09:39.175 "unmap": true, 00:09:39.175 "flush": true, 00:09:39.175 "reset": true, 00:09:39.175 "nvme_admin": false, 00:09:39.175 "nvme_io": false, 00:09:39.175 "nvme_io_md": false, 00:09:39.175 "write_zeroes": true, 00:09:39.175 "zcopy": true, 00:09:39.175 "get_zone_info": false, 00:09:39.175 "zone_management": false, 00:09:39.175 "zone_append": false, 00:09:39.175 "compare": false, 00:09:39.175 "compare_and_write": 
false, 00:09:39.175 "abort": true, 00:09:39.175 "seek_hole": false, 00:09:39.175 "seek_data": false, 00:09:39.175 "copy": true, 00:09:39.175 "nvme_iov_md": false 00:09:39.175 }, 00:09:39.175 "memory_domains": [ 00:09:39.175 { 00:09:39.175 "dma_device_id": "system", 00:09:39.175 "dma_device_type": 1 00:09:39.175 }, 00:09:39.175 { 00:09:39.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.175 "dma_device_type": 2 00:09:39.175 } 00:09:39.175 ], 00:09:39.175 "driver_specific": {} 00:09:39.175 } 00:09:39.175 ]' 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:39.175 22:19:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.745 22:19:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:39.745 22:19:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:09:39.745 22:19:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.745 22:19:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:09:39.745 22:19:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:41.650 22:19:07 
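The `rpc_cmd` calls traced above wrap SPDK's `scripts/rpc.py` against the target's UNIX socket. A dry-run sketch of the provisioning sequence plus the initiator-side connect — the `rpc.py` path is an assumption about the checkout layout; the NQN, serial, transport options, and addresses are the ones from this trace:

```shell
DO=${DO:-echo}
RPC="scripts/rpc.py"                     # relative path inside an SPDK checkout (assumption)
NQN=nqn.2016-06.io.spdk:cnode1

$DO "$RPC" nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data
$DO "$RPC" bdev_malloc_create 512 512 -b Malloc1          # 512 MiB bdev, 512-byte blocks
$DO "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
$DO "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc1
$DO "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Initiator side: connect, then poll lsblk until the serial appears.
$DO nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
$DO lsblk -l -o NAME,SERIAL               # waitforserial greps for SPDKISFASTANDAWESOME
```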
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:41.650 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:41.907 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:42.164 22:19:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:43.097 22:19:08 
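Before the per-filesystem subtests, the traced script carves one GPT partition out of the whole namespace. A dry-run sketch — `/dev/nvme0n1` is the name this run discovered by matching the SPDK serial in `lsblk`; it may differ elsewhere:

```shell
DO=${DO:-echo}
DEV=/dev/nvme0n1          # discovered above via the serial match; not guaranteed stable

$DO mkdir -p /mnt/device
# One GPT partition spanning the whole 512 MiB namespace, labelled SPDK_TEST:
$DO parted -s "$DEV" mklabel gpt mkpart SPDK_TEST 0% 100%
$DO partprobe             # re-read the partition table
$DO sleep 1               # give udev time to create the p1 node, as the script does
```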
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:43.097 ************************************ 00:09:43.097 START TEST filesystem_ext4 00:09:43.097 ************************************ 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:43.097 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:43.097 22:19:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:43.098 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:43.098 22:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:43.098 mke2fs 1.46.5 (30-Dec-2021) 00:09:43.098 Discarding device blocks: 0/522240 done 00:09:43.355 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:43.355 Filesystem UUID: 694c4d2f-435b-43a7-8313-718f4bf98745 00:09:43.355 Superblock backups stored on blocks: 00:09:43.355 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:43.355 00:09:43.355 Allocating group tables: 0/64 done 00:09:43.355 Writing inode tables: 0/64 done 00:09:45.252 Creating journal (8192 blocks): done 00:09:45.510 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:09:45.510 00:09:45.510 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:45.510 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:46.076 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:46.335 22:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3799504 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:46.335 00:09:46.335 real 0m3.143s 00:09:46.335 user 0m0.012s 00:09:46.335 sys 0m0.066s 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:46.335 ************************************ 00:09:46.335 END TEST filesystem_ext4 00:09:46.335 ************************************ 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:46.335 
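Each `filesystem_*` subtest in this trace follows the same create/mount/touch/sync/remove/umount smoke-test shape from target/filesystem.sh. A dry-run sketch of that shared pattern — the mount point and partition name match this log, and the only per-filesystem difference visible in the trace is the force flag (`-F` for ext4, `-f` otherwise):

```shell
DO=${DO:-echo}            # dry-run wrapper: prints commands instead of running them

# Mirrors the nvmf_filesystem_create flow seen in the trace above.
fs_smoke_test() {
    local fstype=$1 part=$2 force=-f
    [ "$fstype" = ext4 ] && force=-F    # mkfs.ext4 forces with -F; btrfs/xfs with -f
    $DO mkfs."$fstype" "$force" "$part"
    $DO mount "$part" /mnt/device
    $DO touch /mnt/device/aaa           # write a file...
    $DO sync
    $DO rm /mnt/device/aaa              # ...then delete it again
    $DO sync
    $DO umount /mnt/device
}

fs_smoke_test ext4  /dev/nvme0n1p1
fs_smoke_test btrfs /dev/nvme0n1p1
fs_smoke_test xfs   /dev/nvme0n1p1
```

After the umount, the real script additionally checks that the target process is still alive (`kill -0 $nvmfpid`) and that `lsblk` still lists both `nvme0n1` and `nvme0n1p1`.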
22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:46.335 ************************************ 00:09:46.335 START TEST filesystem_btrfs 00:09:46.335 ************************************ 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:46.335 22:19:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:46.335 22:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:46.335 btrfs-progs v6.6.2 00:09:46.335 See https://btrfs.readthedocs.io for more information. 00:09:46.335 00:09:46.335 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:46.335 NOTE: several default settings have changed in version 5.15, please make sure 00:09:46.335 this does not affect your deployments: 00:09:46.335 - DUP for metadata (-m dup) 00:09:46.335 - enabled no-holes (-O no-holes) 00:09:46.335 - enabled free-space-tree (-R free-space-tree) 00:09:46.335 00:09:46.335 Label: (null) 00:09:46.335 UUID: f7edb91b-b112-44c8-9b08-5f0665bfd8c5 00:09:46.335 Node size: 16384 00:09:46.335 Sector size: 4096 00:09:46.335 Filesystem size: 510.00MiB 00:09:46.335 Block group profiles: 00:09:46.335 Data: single 8.00MiB 00:09:46.335 Metadata: DUP 32.00MiB 00:09:46.335 System: DUP 8.00MiB 00:09:46.335 SSD detected: yes 00:09:46.335 Zoned device: no 00:09:46.335 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:46.335 Runtime features: free-space-tree 00:09:46.335 Checksum: crc32c 00:09:46.335 Number of devices: 1 00:09:46.335 Devices: 00:09:46.335 ID SIZE PATH 00:09:46.335 1 510.00MiB /dev/nvme0n1p1 00:09:46.335 00:09:46.335 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:46.335 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3799504 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:47.269 00:09:47.269 real 0m0.788s 00:09:47.269 user 0m0.020s 00:09:47.269 sys 0m0.111s 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:47.269 ************************************ 00:09:47.269 END TEST filesystem_btrfs 00:09:47.269 ************************************ 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.269 ************************************ 00:09:47.269 START TEST filesystem_xfs 00:09:47.269 ************************************ 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:47.269 22:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:47.269 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:47.270 22:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:47.270 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:47.270 = sectsz=512 attr=2, projid32bit=1 00:09:47.270 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:47.270 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:47.270 data = bsize=4096 blocks=130560, imaxpct=25 00:09:47.270 = sunit=0 swidth=0 blks 00:09:47.270 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:47.270 log =internal log bsize=4096 blocks=16384, version=2 00:09:47.270 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:47.270 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:48.639 Discarding blocks...Done. 
00:09:48.640 22:19:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:48.640 22:19:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3799504 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:50.543 22:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:50.543 00:09:50.543 real 0m3.124s 00:09:50.543 user 0m0.018s 00:09:50.543 sys 0m0.063s 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:50.543 ************************************ 00:09:50.543 END TEST filesystem_xfs 00:09:50.543 ************************************ 00:09:50.543 22:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:09:50.543 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3799504 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3799504 ']' 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3799504 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3799504 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3799504' 00:09:50.800 killing process with pid 3799504 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3799504 00:09:50.800 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3799504 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:51.058 00:09:51.058 real 0m12.525s 00:09:51.058 user 0m47.934s 00:09:51.058 sys 0m1.832s 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 ************************************ 00:09:51.058 END TEST nvmf_filesystem_no_in_capsule 00:09:51.058 ************************************ 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:51.058 22:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 ************************************ 00:09:51.058 START TEST nvmf_filesystem_in_capsule 00:09:51.058 ************************************ 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3800834 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3800834 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3800834 ']' 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.058 22:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:51.058 22:19:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.058 [2024-07-24 22:19:16.749360] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:09:51.058 [2024-07-24 22:19:16.749461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.317 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.317 [2024-07-24 22:19:16.814478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:51.317 [2024-07-24 22:19:16.931559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.317 [2024-07-24 22:19:16.931632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.317 [2024-07-24 22:19:16.931649] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.317 [2024-07-24 22:19:16.931662] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.317 [2024-07-24 22:19:16.931674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:51.317 [2024-07-24 22:19:16.931753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:51.317 [2024-07-24 22:19:16.931864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.317 [2024-07-24 22:19:16.931963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:51.317 [2024-07-24 22:19:16.931967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 [2024-07-24 22:19:17.081711] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 22:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 [2024-07-24 22:19:17.236114] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:51.576 22:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:09:51.576 { 00:09:51.576 "name": "Malloc1", 00:09:51.576 "aliases": [ 00:09:51.576 "b9f2bbfb-0ca9-4ea6-aeed-8c68f0b6e5ba" 00:09:51.576 ], 00:09:51.576 "product_name": "Malloc disk", 00:09:51.576 "block_size": 512, 00:09:51.576 "num_blocks": 1048576, 00:09:51.576 "uuid": "b9f2bbfb-0ca9-4ea6-aeed-8c68f0b6e5ba", 00:09:51.576 "assigned_rate_limits": { 00:09:51.576 "rw_ios_per_sec": 0, 00:09:51.576 "rw_mbytes_per_sec": 0, 00:09:51.576 "r_mbytes_per_sec": 0, 00:09:51.576 "w_mbytes_per_sec": 0 00:09:51.576 }, 00:09:51.576 "claimed": true, 00:09:51.576 "claim_type": "exclusive_write", 00:09:51.576 "zoned": false, 00:09:51.576 "supported_io_types": { 00:09:51.576 "read": true, 00:09:51.576 "write": true, 00:09:51.576 "unmap": true, 00:09:51.576 "flush": true, 00:09:51.576 "reset": true, 00:09:51.576 "nvme_admin": false, 00:09:51.576 "nvme_io": false, 00:09:51.576 "nvme_io_md": false, 00:09:51.576 "write_zeroes": true, 00:09:51.576 "zcopy": true, 00:09:51.576 "get_zone_info": false, 00:09:51.576 "zone_management": false, 00:09:51.576 "zone_append": false, 00:09:51.576 "compare": false, 00:09:51.576 "compare_and_write": false, 00:09:51.576 "abort": true, 00:09:51.576 "seek_hole": false, 00:09:51.576 "seek_data": false, 00:09:51.576 "copy": true, 00:09:51.576 "nvme_iov_md": false 00:09:51.576 }, 00:09:51.576 "memory_domains": [ 00:09:51.576 { 00:09:51.576 "dma_device_id": "system", 00:09:51.576 "dma_device_type": 1 00:09:51.576 }, 00:09:51.576 { 00:09:51.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.576 "dma_device_type": 2 00:09:51.576 } 00:09:51.576 ], 00:09:51.576 
"driver_specific": {} 00:09:51.576 } 00:09:51.576 ]' 00:09:51.576 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:51.834 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:52.400 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:52.400 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:09:52.400 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:52.400 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n 
'' ]] 00:09:52.400 22:19:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:54.298 22:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:54.298 22:19:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:54.556 22:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:55.121 22:19:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.055 ************************************ 00:09:56.055 START TEST filesystem_in_capsule_ext4 00:09:56.055 ************************************ 00:09:56.055 22:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:09:56.055 22:19:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:56.055 mke2fs 1.46.5 (30-Dec-2021) 00:09:56.055 Discarding device blocks: 
0/522240 done 00:09:56.055 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:56.055 Filesystem UUID: ff2dd4ab-4fd7-4457-97da-2999851949b4 00:09:56.055 Superblock backups stored on blocks: 00:09:56.055 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:56.055 00:09:56.055 Allocating group tables: 0/64 done 00:09:56.055 Writing inode tables: 0/64 done 00:09:56.314 Creating journal (8192 blocks): done 00:09:56.572 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:09:56.572 00:09:56.572 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:09:56.572 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:56.829 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3800834 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:56.830 00:09:56.830 real 0m0.819s 00:09:56.830 user 0m0.023s 00:09:56.830 sys 0m0.047s 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:56.830 ************************************ 00:09:56.830 END TEST filesystem_in_capsule_ext4 00:09:56.830 ************************************ 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.830 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.088 ************************************ 00:09:57.088 START 
TEST filesystem_in_capsule_btrfs 00:09:57.088 ************************************ 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:09:57.088 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:57.346 btrfs-progs v6.6.2 00:09:57.346 See https://btrfs.readthedocs.io for more information. 00:09:57.346 00:09:57.346 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:57.346 NOTE: several default settings have changed in version 5.15, please make sure 00:09:57.346 this does not affect your deployments: 00:09:57.346 - DUP for metadata (-m dup) 00:09:57.346 - enabled no-holes (-O no-holes) 00:09:57.346 - enabled free-space-tree (-R free-space-tree) 00:09:57.346 00:09:57.346 Label: (null) 00:09:57.346 UUID: e40197b3-29d6-410e-b787-df4bdc6df089 00:09:57.346 Node size: 16384 00:09:57.346 Sector size: 4096 00:09:57.346 Filesystem size: 510.00MiB 00:09:57.346 Block group profiles: 00:09:57.346 Data: single 8.00MiB 00:09:57.346 Metadata: DUP 32.00MiB 00:09:57.346 System: DUP 8.00MiB 00:09:57.346 SSD detected: yes 00:09:57.346 Zoned device: no 00:09:57.346 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:57.346 Runtime features: free-space-tree 00:09:57.346 Checksum: crc32c 00:09:57.346 Number of devices: 1 00:09:57.346 Devices: 00:09:57.346 ID SIZE PATH 00:09:57.346 1 510.00MiB /dev/nvme0n1p1 00:09:57.346 00:09:57.346 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:09:57.346 22:19:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:09:57.910 22:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3800834 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:57.910 00:09:57.910 real 0m1.027s 00:09:57.910 user 0m0.025s 00:09:57.910 sys 0m0.107s 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 ************************************ 00:09:57.910 END TEST 
filesystem_in_capsule_btrfs 00:09:57.910 ************************************ 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:57.910 ************************************ 00:09:57.910 START TEST filesystem_in_capsule_xfs 00:09:57.910 ************************************ 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:09:57.910 22:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:09:57.910 22:19:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:58.168 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:58.168 = sectsz=512 attr=2, projid32bit=1 00:09:58.168 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:58.168 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:58.168 data = bsize=4096 blocks=130560, imaxpct=25 00:09:58.168 = sunit=0 swidth=0 blks 00:09:58.168 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:58.168 log =internal log bsize=4096 blocks=16384, version=2 00:09:58.168 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:58.168 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:59.159 Discarding blocks...Done. 
00:09:59.159 22:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:09:59.159 22:19:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3800834 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:01.686 00:10:01.686 real 0m3.542s 00:10:01.686 user 0m0.018s 00:10:01.686 sys 0m0.056s 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:01.686 ************************************ 00:10:01.686 END TEST filesystem_in_capsule_xfs 00:10:01.686 ************************************ 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:01.686 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:01.945 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.945 22:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3800834 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3800834 ']' 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3800834 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:01.945 22:19:27 
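The waitforserial_disconnect helper traced above (autotest_common.sh@1217-1229) is a bounded poll: it keeps probing `lsblk -l -o NAME,SERIAL | grep -q -w <serial>` and returns once the controller's serial has disappeared. A generic sketch of that loop shape, with the probe command passed as arguments; the 0.1s interval and the retry budget are assumptions, not the script's actual values:

```shell
#!/bin/sh
# Bounded poll: rerun the probe command until it fails (the resource is
# gone) or the retry budget runs out. Mirrors the shape of
# waitforserial_disconnect, whose probe is `lsblk ... | grep -q -w`.
wait_until_gone() {
    tries=$1; shift
    i=0
    while "$@"; do
        i=$((i + 1))
        [ "$i" -lt "$tries" ] || return 1   # gave up: still present
        sleep 0.1
    done
    return 0                                # probe failed: it is gone
}
```

Against a live target this would be invoked as something like `wait_until_gone 20 sh -c 'lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME'`.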
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3800834 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3800834' 00:10:01.945 killing process with pid 3800834 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3800834 00:10:01.945 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3800834 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:02.206 00:10:02.206 real 0m11.120s 00:10:02.206 user 0m42.414s 00:10:02.206 sys 0m1.715s 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:02.206 ************************************ 00:10:02.206 END TEST nvmf_filesystem_in_capsule 00:10:02.206 ************************************ 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
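The killprocess call traced here (autotest_common.sh@948-972) guards its kill with three checks visible in the xtrace: a non-empty pid, `kill -0` to confirm the process still exists, and a `ps --no-headers -o comm=` lookup so it never signals something named sudo. A minimal sketch of that shape; note `wait` only reaps processes that are children of the current shell:

```shell
#!/bin/sh
# Sketch of the killprocess pattern from the trace: validate the pid,
# refuse to kill a process whose command name is "sudo", then kill
# and reap it.
killprocess() {
    pid=$1
    [ -n "$pid" ] || return 1                 # no pid given
    kill -0 "$pid" 2>/dev/null || return 1    # process already gone
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != "sudo" ] || return 1         # never kill a sudo wrapper
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it if it is our child
    return 0
}
```

In this run the pid is 3800834, the nvmf target app, whose reactor_0 comm name passes the sudo guard before the kill lands.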
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:02.206 rmmod nvme_tcp 00:10:02.206 rmmod nvme_fabrics 00:10:02.206 rmmod nvme_keyring 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.206 22:19:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:04.748 00:10:04.748 real 
0m27.809s 00:10:04.748 user 1m31.105s 00:10:04.748 sys 0m4.940s 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.748 ************************************ 00:10:04.748 END TEST nvmf_filesystem 00:10:04.748 ************************************ 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:04.748 22:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:04.749 22:19:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:04.749 ************************************ 00:10:04.749 START TEST nvmf_target_discovery 00:10:04.749 ************************************ 00:10:04.749 22:19:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:04.749 * Looking for test storage... 
00:10:04.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:04.749 22:19:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:06.131 
22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 
(0x8086 - 0x159b)' 00:10:06.131 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:06.131 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:06.131 22:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:06.131 Found net devices under 0000:08:00.0: cvl_0_0 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:06.131 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:06.131 22:19:31 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:06.132 Found net devices under 0000:08:00.1: cvl_0_1 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:06.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:06.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:10:06.132 00:10:06.132 --- 10.0.0.2 ping statistics --- 00:10:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.132 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:06.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:06.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:10:06.132 00:10:06.132 --- 10.0.0.1 ping statistics --- 00:10:06.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.132 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:06.132 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:06.391 22:19:31 
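The namespace plumbing traced above (nvmf/common.sh@242-267) can be summarized as a dry-run sketch. Interface names, addresses, and the port come from the log; the `run=echo` guard is added here so the sketch prints the commands instead of requiring root:

```shell
# Dry-run sketch of the target-namespace setup performed by nvmf_tcp_init.
# Names/addresses are copied from the trace; run=echo is illustrative only
# (set run= to actually execute; every command then needs root).
setup_target_ns() {
    run=echo
    target_if=cvl_0_0
    initiator_if=cvl_0_1
    ns=cvl_0_0_ns_spdk

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"         # hide the target NIC in the netns
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"  # initiator side
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2                           # reachability check, both ways in the log
}
```

The trace then pings in both directions (host to 10.0.0.2, namespace to 10.0.0.1) before starting the target, confirming the two interfaces can reach each other across the namespace boundary.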
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3803535 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3803535 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3803535 ']' 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:06.391 22:19:31 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.391 [2024-07-24 22:19:31.912315] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:10:06.391 [2024-07-24 22:19:31.912413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.391 EAL: No free 2048 kB hugepages reported on node 1 00:10:06.391 [2024-07-24 22:19:31.977535] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.391 [2024-07-24 22:19:32.094286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.391 [2024-07-24 22:19:32.094348] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.391 [2024-07-24 22:19:32.094365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.391 [2024-07-24 22:19:32.094379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:06.391 [2024-07-24 22:19:32.094391] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:06.391 [2024-07-24 22:19:32.094509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.391 [2024-07-24 22:19:32.094552] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.391 [2024-07-24 22:19:32.094639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.391 [2024-07-24 22:19:32.094643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 [2024-07-24 22:19:32.240762] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:06.650 22:19:32 
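The nvmfappstart sequence above (launch nvmf_tgt inside the namespace, wait for its RPC socket, create the TCP transport) can be sketched the same way. The binary path and flags are copied from the trace; `rpc_cmd` in the log is a wrapper over SPDK's `scripts/rpc.py`, and the `run=echo` guard again makes this a dry run:

```shell
# Dry-run sketch of nvmfappstart -m 0xF followed by transport creation.
# Paths and flags come from the trace; run=echo is illustrative only.
start_nvmf_tgt() {
    run=echo
    ns_exec="ip netns exec cvl_0_0_ns_spdk"
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3
    $run $ns_exec "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF
    # (the real script then polls /var/tmp/spdk.sock via waitforlisten)
    $run "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
}
```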
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 Null1 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.650 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 [2024-07-24 22:19:32.281054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 Null2 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 
22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 Null3 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.651 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.909 Null4 00:10:06.909 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:06.910 22:19:32 
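The `seq 1 4` loop traced above creates, per iteration, one null bdev, one subsystem, one namespace attachment, and one TCP listener, then adds the discovery listener and a referral on port 4430. A dry-run sketch of the same RPC sequence (commands and arguments copied from the trace, `run=echo` added so nothing executes):

```shell
# Dry-run sketch of the target/discovery.sh provisioning loop.
# RPC names/arguments match the trace; run=echo is illustrative only.
provision_subsystems() {
    run=echo
    rpc="scripts/rpc.py"
    for i in 1 2 3 4; do
        $run $rpc bdev_null_create "Null$i" 102400 512       # 100 MiB, 512 B blocks
        $run $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"
        $run $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $run $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    $run $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $run $rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
}
```

This is why the subsequent `nvme discover` output reports six records: four NVMe subsystems, the current discovery subsystem, and the 4430 referral.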
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:10:06.910 00:10:06.910 Discovery Log Number of Records 6, Generation counter 6 00:10:06.910 =====Discovery Log Entry 0====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: current discovery subsystem 00:10:06.910 treq: not required 00:10:06.910 portid: 0 00:10:06.910 trsvcid: 4420 00:10:06.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: explicit discovery connections, duplicate discovery information 00:10:06.910 sectype: none 00:10:06.910 =====Discovery Log Entry 1====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: nvme subsystem 00:10:06.910 treq: not required 00:10:06.910 portid: 0 00:10:06.910 trsvcid: 4420 00:10:06.910 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: none 00:10:06.910 sectype: none 00:10:06.910 =====Discovery Log Entry 2====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: nvme subsystem 00:10:06.910 treq: not required 00:10:06.910 portid: 0 00:10:06.910 trsvcid: 4420 00:10:06.910 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: none 00:10:06.910 sectype: none 00:10:06.910 =====Discovery Log Entry 3====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: nvme subsystem 00:10:06.910 treq: not required 00:10:06.910 portid: 
0 00:10:06.910 trsvcid: 4420 00:10:06.910 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: none 00:10:06.910 sectype: none 00:10:06.910 =====Discovery Log Entry 4====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: nvme subsystem 00:10:06.910 treq: not required 00:10:06.910 portid: 0 00:10:06.910 trsvcid: 4420 00:10:06.910 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: none 00:10:06.910 sectype: none 00:10:06.910 =====Discovery Log Entry 5====== 00:10:06.910 trtype: tcp 00:10:06.910 adrfam: ipv4 00:10:06.910 subtype: discovery subsystem referral 00:10:06.910 treq: not required 00:10:06.910 portid: 0 00:10:06.910 trsvcid: 4430 00:10:06.910 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:06.910 traddr: 10.0.0.2 00:10:06.910 eflags: none 00:10:06.910 sectype: none 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:06.910 Perform nvmf subsystem discovery via RPC 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.910 [ 00:10:06.910 { 00:10:06.910 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:06.910 "subtype": "Discovery", 00:10:06.910 "listen_addresses": [ 00:10:06.910 { 00:10:06.910 "trtype": "TCP", 00:10:06.910 "adrfam": "IPv4", 00:10:06.910 "traddr": "10.0.0.2", 00:10:06.910 "trsvcid": "4420" 00:10:06.910 } 00:10:06.910 ], 00:10:06.910 "allow_any_host": true, 00:10:06.910 "hosts": [] 00:10:06.910 }, 00:10:06.910 { 00:10:06.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:06.910 "subtype": "NVMe", 00:10:06.910 "listen_addresses": [ 
00:10:06.910 { 00:10:06.910 "trtype": "TCP", 00:10:06.910 "adrfam": "IPv4", 00:10:06.910 "traddr": "10.0.0.2", 00:10:06.910 "trsvcid": "4420" 00:10:06.910 } 00:10:06.910 ], 00:10:06.910 "allow_any_host": true, 00:10:06.910 "hosts": [], 00:10:06.910 "serial_number": "SPDK00000000000001", 00:10:06.910 "model_number": "SPDK bdev Controller", 00:10:06.910 "max_namespaces": 32, 00:10:06.910 "min_cntlid": 1, 00:10:06.910 "max_cntlid": 65519, 00:10:06.910 "namespaces": [ 00:10:06.910 { 00:10:06.910 "nsid": 1, 00:10:06.910 "bdev_name": "Null1", 00:10:06.910 "name": "Null1", 00:10:06.910 "nguid": "4EC4994CBB934FFA8515770B409AC303", 00:10:06.910 "uuid": "4ec4994c-bb93-4ffa-8515-770b409ac303" 00:10:06.910 } 00:10:06.910 ] 00:10:06.910 }, 00:10:06.910 { 00:10:06.910 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:06.910 "subtype": "NVMe", 00:10:06.910 "listen_addresses": [ 00:10:06.910 { 00:10:06.910 "trtype": "TCP", 00:10:06.910 "adrfam": "IPv4", 00:10:06.910 "traddr": "10.0.0.2", 00:10:06.910 "trsvcid": "4420" 00:10:06.910 } 00:10:06.910 ], 00:10:06.910 "allow_any_host": true, 00:10:06.910 "hosts": [], 00:10:06.910 "serial_number": "SPDK00000000000002", 00:10:06.910 "model_number": "SPDK bdev Controller", 00:10:06.910 "max_namespaces": 32, 00:10:06.910 "min_cntlid": 1, 00:10:06.910 "max_cntlid": 65519, 00:10:06.910 "namespaces": [ 00:10:06.910 { 00:10:06.910 "nsid": 1, 00:10:06.910 "bdev_name": "Null2", 00:10:06.910 "name": "Null2", 00:10:06.910 "nguid": "68FB189F8F4F4BD29CD04E5F0B1B66FF", 00:10:06.910 "uuid": "68fb189f-8f4f-4bd2-9cd0-4e5f0b1b66ff" 00:10:06.910 } 00:10:06.910 ] 00:10:06.910 }, 00:10:06.910 { 00:10:06.910 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:06.910 "subtype": "NVMe", 00:10:06.910 "listen_addresses": [ 00:10:06.910 { 00:10:06.910 "trtype": "TCP", 00:10:06.910 "adrfam": "IPv4", 00:10:06.910 "traddr": "10.0.0.2", 00:10:06.910 "trsvcid": "4420" 00:10:06.910 } 00:10:06.910 ], 00:10:06.910 "allow_any_host": true, 00:10:06.910 "hosts": [], 00:10:06.910 
"serial_number": "SPDK00000000000003", 00:10:06.910 "model_number": "SPDK bdev Controller", 00:10:06.910 "max_namespaces": 32, 00:10:06.910 "min_cntlid": 1, 00:10:06.910 "max_cntlid": 65519, 00:10:06.910 "namespaces": [ 00:10:06.910 { 00:10:06.910 "nsid": 1, 00:10:06.910 "bdev_name": "Null3", 00:10:06.910 "name": "Null3", 00:10:06.910 "nguid": "7B87E29BE3534407AA412EE1D1D4BCED", 00:10:06.910 "uuid": "7b87e29b-e353-4407-aa41-2ee1d1d4bced" 00:10:06.910 } 00:10:06.910 ] 00:10:06.910 }, 00:10:06.910 { 00:10:06.910 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:06.910 "subtype": "NVMe", 00:10:06.910 "listen_addresses": [ 00:10:06.910 { 00:10:06.910 "trtype": "TCP", 00:10:06.910 "adrfam": "IPv4", 00:10:06.910 "traddr": "10.0.0.2", 00:10:06.910 "trsvcid": "4420" 00:10:06.910 } 00:10:06.910 ], 00:10:06.910 "allow_any_host": true, 00:10:06.910 "hosts": [], 00:10:06.910 "serial_number": "SPDK00000000000004", 00:10:06.910 "model_number": "SPDK bdev Controller", 00:10:06.910 "max_namespaces": 32, 00:10:06.910 "min_cntlid": 1, 00:10:06.910 "max_cntlid": 65519, 00:10:06.910 "namespaces": [ 00:10:06.910 { 00:10:06.910 "nsid": 1, 00:10:06.910 "bdev_name": "Null4", 00:10:06.910 "name": "Null4", 00:10:06.910 "nguid": "F21BC4299BB64FCC8496CEB8378C0BDD", 00:10:06.910 "uuid": "f21bc429-9bb6-4fcc-8496-ceb8378c0bdd" 00:10:06.910 } 00:10:06.910 ] 00:10:06.910 } 00:10:06.910 ] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:06.910 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
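The `nvmf_get_subsystems` reply printed above is plain JSON once the timestamped log prefixes are stripped. A trimmed reconstruction (discovery entry plus cnode1 only; field names and values copied from the log, the trimming is illustrative) shows how a script could pull the NQNs out with basic text tools, with no jq dependency:

```shell
# Trimmed reconstruction of the nvmf_get_subsystems reply shown in the log;
# only the discovery subsystem and cnode1 are kept, for illustration.
reply='[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "serial_number": "SPDK00000000000001",
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]'
# Extract every "nqn" value: match the key/value pair, then cut out field 4
# (the value sits between the 3rd and 4th double quotes of each match).
nqns=$(printf '%s\n' "$reply" | grep -o '"nqn": "[^"]*"' | cut -d'"' -f4)
echo "$nqns"
```

With the full reply from the log, the same pipeline would list the discovery NQN plus cnode1 through cnode4, which is what the cleanup loop that follows iterates over when deleting the subsystems.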
xtrace_disable 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.911 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.169 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.169 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:07.170 
22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:07.170 rmmod nvme_tcp 00:10:07.170 rmmod nvme_fabrics 00:10:07.170 rmmod nvme_keyring 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3803535 ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3803535 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3803535 ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3803535 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803535 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803535' 00:10:07.170 killing process with pid 3803535 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3803535 00:10:07.170 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3803535 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.430 22:19:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.338 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.338 00:10:09.338 real 0m5.044s 00:10:09.338 user 0m4.191s 00:10:09.338 sys 0m1.574s 00:10:09.338 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.338 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:09.338 ************************************ 00:10:09.338 END TEST 
nvmf_target_discovery 00:10:09.338 ************************************ 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:09.597 ************************************ 00:10:09.597 START TEST nvmf_referrals 00:10:09.597 ************************************ 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:09.597 * Looking for test storage... 00:10:09.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.597 22:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.597 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.598 22:19:35 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.506 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:11.507 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:11.507 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:11.507 Found net devices under 0000:08:00.0: cvl_0_0 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:08:00.1: cvl_0_1' 00:10:11.507 Found net devices under 0000:08:00.1: cvl_0_1 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.507 22:19:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.507 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.507 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:10:11.507 00:10:11.507 --- 10.0.0.2 ping statistics --- 00:10:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.507 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.507 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.507 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:10:11.507 00:10:11.507 --- 10.0.0.1 ping statistics --- 00:10:11.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.507 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3805097 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3805097 00:10:11.507 
22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3805097 ']' 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.507 22:19:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.507 [2024-07-24 22:19:37.052897] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:10:11.508 [2024-07-24 22:19:37.052988] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.508 EAL: No free 2048 kB hugepages reported on node 1 00:10:11.508 [2024-07-24 22:19:37.119220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.766 [2024-07-24 22:19:37.240283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.766 [2024-07-24 22:19:37.240347] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.766 [2024-07-24 22:19:37.240362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.766 [2024-07-24 22:19:37.240375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.766 [2024-07-24 22:19:37.240387] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.766 [2024-07-24 22:19:37.240466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.766 [2024-07-24 22:19:37.240550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.766 [2024-07-24 22:19:37.240519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.766 [2024-07-24 22:19:37.240553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 [2024-07-24 22:19:37.389755] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 [2024-07-24 22:19:37.401985] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:11.766 22:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:11.766 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.024 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:12.293 22:19:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.555 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:12.813 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:13.071 22:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.071 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:13.328 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:13.329 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:13.329 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:13.329 22:19:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.329 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:13.329 rmmod nvme_tcp 00:10:13.329 rmmod nvme_fabrics 00:10:13.587 rmmod nvme_keyring 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3805097 ']' 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3805097 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3805097 ']' 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3805097 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805097 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805097' 00:10:13.587 killing process with pid 3805097 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@967 -- # kill 3805097 00:10:13.587 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3805097 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.847 22:19:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:15.754 00:10:15.754 real 0m6.279s 00:10:15.754 user 0m9.514s 00:10:15.754 sys 0m1.930s 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:15.754 ************************************ 00:10:15.754 END TEST nvmf_referrals 00:10:15.754 ************************************ 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- 
# '[' 3 -le 1 ']' 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:15.754 ************************************ 00:10:15.754 START TEST nvmf_connect_disconnect 00:10:15.754 ************************************ 00:10:15.754 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:16.012 * Looking for test storage... 00:10:16.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.012 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.012 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:16.012 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:16.012 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:16.012 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.013 22:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:16.013 22:19:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.920 22:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.920 22:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:17.920 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:17.920 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:17.920 22:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.920 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:17.920 Found net devices under 0000:08:00.0: cvl_0_0 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:17.921 
22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:17.921 Found net devices under 0000:08:00.1: cvl_0_1 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:17.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:17.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:10:17.921 00:10:17.921 --- 10.0.0.2 ping statistics --- 00:10:17.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.921 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:17.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:10:17.921 00:10:17.921 --- 10.0.0.1 ping statistics --- 00:10:17.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.921 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3806857 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3806857 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3806857 ']' 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.921 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:17.921 [2024-07-24 22:19:43.351184] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:10:17.921 [2024-07-24 22:19:43.351289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.921 EAL: No free 2048 kB hugepages reported on node 1 00:10:17.921 [2024-07-24 22:19:43.416852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.921 [2024-07-24 22:19:43.533911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.921 [2024-07-24 22:19:43.533974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.921 [2024-07-24 22:19:43.533991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.921 [2024-07-24 22:19:43.534004] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:17.921 [2024-07-24 22:19:43.534016] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:17.921 [2024-07-24 22:19:43.534136] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.921 [2024-07-24 22:19:43.534212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.921 [2024-07-24 22:19:43.534264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.921 [2024-07-24 22:19:43.534267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 [2024-07-24 22:19:43.683785] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:18.178 22:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:18.178 [2024-07-24 22:19:43.737515] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:18.178 22:19:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:20.705 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:30.828 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:30.828 rmmod nvme_tcp 00:10:31.090 rmmod nvme_fabrics 00:10:31.090 rmmod nvme_keyring 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3806857 ']' 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3806857 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3806857 ']' 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3806857 00:10:31.090 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806857 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806857' 00:10:31.091 killing process with pid 3806857 00:10:31.091 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3806857 00:10:31.091 22:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3806857 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.349 22:19:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.258 00:10:33.258 real 0m17.455s 00:10:33.258 user 0m52.447s 00:10:33.258 sys 0m2.986s 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.258 ************************************ 00:10:33.258 END TEST nvmf_connect_disconnect 00:10:33.258 ************************************ 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:33.258 ************************************ 00:10:33.258 START TEST nvmf_multitarget 00:10:33.258 ************************************ 00:10:33.258 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:33.518 * Looking for test storage... 00:10:33.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.518 22:19:58 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.518 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.519 22:19:58 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:33.519 
22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.519 22:19:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:34.900 22:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.900 22:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:34.900 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:34.900 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.900 22:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:34.900 Found net devices under 0000:08:00.0: cvl_0_0 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:34.900 Found net devices under 0000:08:00.1: cvl_0_1 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:34.900 22:20:00 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.900 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:35.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:10:35.159 00:10:35.159 --- 10.0.0.2 ping statistics --- 00:10:35.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.159 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:35.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:10:35.159 00:10:35.159 --- 10.0.0.1 ping statistics --- 00:10:35.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.159 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3809655 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.159 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3809655 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3809655 ']' 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:35.160 22:20:00 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:35.160 [2024-07-24 22:20:00.755611] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:10:35.160 [2024-07-24 22:20:00.755709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.160 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.160 [2024-07-24 22:20:00.820384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.418 [2024-07-24 22:20:00.937632] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.418 [2024-07-24 22:20:00.937697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:35.418 [2024-07-24 22:20:00.937712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.418 [2024-07-24 22:20:00.937726] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.418 [2024-07-24 22:20:00.937738] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.418 [2024-07-24 22:20:00.937841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.418 [2024-07-24 22:20:00.937915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.418 [2024-07-24 22:20:00.937963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.418 [2024-07-24 22:20:00.937966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:35.418 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:35.418 22:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:35.676 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:35.676 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:35.676 "nvmf_tgt_1" 00:10:35.676 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:35.934 "nvmf_tgt_2" 00:10:35.934 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:35.934 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:35.934 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:35.934 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:36.192 true 00:10:36.192 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:36.192 true 00:10:36.192 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:36.192 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.451 22:20:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.451 rmmod nvme_tcp 00:10:36.451 rmmod nvme_fabrics 00:10:36.451 rmmod nvme_keyring 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3809655 ']' 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3809655 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3809655 ']' 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3809655 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809655 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809655' 00:10:36.451 killing process with pid 3809655 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3809655 00:10:36.451 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3809655 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.712 22:20:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:39.257 00:10:39.257 real 
0m5.409s 00:10:39.257 user 0m6.717s 00:10:39.257 sys 0m1.647s 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:39.257 ************************************ 00:10:39.257 END TEST nvmf_multitarget 00:10:39.257 ************************************ 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:39.257 ************************************ 00:10:39.257 START TEST nvmf_rpc 00:10:39.257 ************************************ 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:39.257 * Looking for test storage... 
00:10:39.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:39.257 
22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:39.257 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:39.258 22:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:10:39.258 22:20:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:10:40.637 Found 0000:08:00.0 (0x8086 - 0x159b) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:10:40.637 Found 0000:08:00.1 (0x8086 - 0x159b) 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.637 22:20:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.637 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:10:40.638 Found net devices under 0000:08:00.0: cvl_0_0 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:10:40.638 Found net devices under 0000:08:00.1: cvl_0_1 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.638 22:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:40.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:10:40.638 00:10:40.638 --- 10.0.0.2 ping statistics --- 00:10:40.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.638 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:10:40.638 00:10:40.638 --- 10.0.0.1 ping statistics --- 00:10:40.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.638 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3811282 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3811282 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3811282 ']' 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.638 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.638 [2024-07-24 22:20:06.204839] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:10:40.638 [2024-07-24 22:20:06.204934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.638 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.638 [2024-07-24 22:20:06.271280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.897 [2024-07-24 22:20:06.391267] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.897 [2024-07-24 22:20:06.391328] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.897 [2024-07-24 22:20:06.391344] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.897 [2024-07-24 22:20:06.391357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.897 [2024-07-24 22:20:06.391369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:40.897 [2024-07-24 22:20:06.391449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.897 [2024-07-24 22:20:06.391508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.897 [2024-07-24 22:20:06.391533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.897 [2024-07-24 22:20:06.391536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:40.897 "tick_rate": 2700000000, 00:10:40.897 "poll_groups": [ 00:10:40.897 { 00:10:40.897 "name": "nvmf_tgt_poll_group_000", 00:10:40.897 "admin_qpairs": 0, 00:10:40.897 "io_qpairs": 0, 00:10:40.897 "current_admin_qpairs": 0, 00:10:40.897 "current_io_qpairs": 0, 00:10:40.897 "pending_bdev_io": 0, 00:10:40.897 "completed_nvme_io": 0, 
00:10:40.897 "transports": [] 00:10:40.897 }, 00:10:40.897 { 00:10:40.897 "name": "nvmf_tgt_poll_group_001", 00:10:40.897 "admin_qpairs": 0, 00:10:40.897 "io_qpairs": 0, 00:10:40.897 "current_admin_qpairs": 0, 00:10:40.897 "current_io_qpairs": 0, 00:10:40.897 "pending_bdev_io": 0, 00:10:40.897 "completed_nvme_io": 0, 00:10:40.897 "transports": [] 00:10:40.897 }, 00:10:40.897 { 00:10:40.897 "name": "nvmf_tgt_poll_group_002", 00:10:40.897 "admin_qpairs": 0, 00:10:40.897 "io_qpairs": 0, 00:10:40.897 "current_admin_qpairs": 0, 00:10:40.897 "current_io_qpairs": 0, 00:10:40.897 "pending_bdev_io": 0, 00:10:40.897 "completed_nvme_io": 0, 00:10:40.897 "transports": [] 00:10:40.897 }, 00:10:40.897 { 00:10:40.897 "name": "nvmf_tgt_poll_group_003", 00:10:40.897 "admin_qpairs": 0, 00:10:40.897 "io_qpairs": 0, 00:10:40.897 "current_admin_qpairs": 0, 00:10:40.897 "current_io_qpairs": 0, 00:10:40.897 "pending_bdev_io": 0, 00:10:40.897 "completed_nvme_io": 0, 00:10:40.897 "transports": [] 00:10:40.897 } 00:10:40.897 ] 00:10:40.897 }' 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:40.897 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:40.898 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:40.898 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.158 [2024-07-24 22:20:06.650084] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:41.158 "tick_rate": 2700000000, 00:10:41.158 "poll_groups": [ 00:10:41.158 { 00:10:41.158 "name": "nvmf_tgt_poll_group_000", 00:10:41.158 "admin_qpairs": 0, 00:10:41.158 "io_qpairs": 0, 00:10:41.158 "current_admin_qpairs": 0, 00:10:41.158 "current_io_qpairs": 0, 00:10:41.158 "pending_bdev_io": 0, 00:10:41.158 "completed_nvme_io": 0, 00:10:41.158 "transports": [ 00:10:41.158 { 00:10:41.158 "trtype": "TCP" 00:10:41.158 } 00:10:41.158 ] 00:10:41.158 }, 00:10:41.158 { 00:10:41.158 "name": "nvmf_tgt_poll_group_001", 00:10:41.158 "admin_qpairs": 0, 00:10:41.158 "io_qpairs": 0, 00:10:41.158 "current_admin_qpairs": 0, 00:10:41.158 "current_io_qpairs": 0, 00:10:41.158 "pending_bdev_io": 0, 00:10:41.158 "completed_nvme_io": 0, 00:10:41.158 "transports": [ 00:10:41.158 { 00:10:41.158 "trtype": "TCP" 00:10:41.158 } 00:10:41.158 ] 00:10:41.158 }, 00:10:41.158 { 00:10:41.158 "name": "nvmf_tgt_poll_group_002", 00:10:41.158 "admin_qpairs": 0, 00:10:41.158 "io_qpairs": 0, 00:10:41.158 "current_admin_qpairs": 0, 00:10:41.158 "current_io_qpairs": 0, 00:10:41.158 "pending_bdev_io": 0, 00:10:41.158 "completed_nvme_io": 0, 00:10:41.158 
"transports": [ 00:10:41.158 { 00:10:41.158 "trtype": "TCP" 00:10:41.158 } 00:10:41.158 ] 00:10:41.158 }, 00:10:41.158 { 00:10:41.158 "name": "nvmf_tgt_poll_group_003", 00:10:41.158 "admin_qpairs": 0, 00:10:41.158 "io_qpairs": 0, 00:10:41.158 "current_admin_qpairs": 0, 00:10:41.158 "current_io_qpairs": 0, 00:10:41.158 "pending_bdev_io": 0, 00:10:41.158 "completed_nvme_io": 0, 00:10:41.158 "transports": [ 00:10:41.158 { 00:10:41.158 "trtype": "TCP" 00:10:41.158 } 00:10:41.158 ] 00:10:41.158 } 00:10:41.158 ] 00:10:41.158 }' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:41.158 22:20:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.158 Malloc1 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:41.158 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.159 [2024-07-24 22:20:06.808849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.2 -s 4420 00:10:41.159 [2024-07-24 22:20:06.831246] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:41.159 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:41.159 could not add new controller: failed to write to nvme-fabrics device 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.159 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.425 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
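The `NOT`/`valid_exec_arg` machinery traced above runs a command that is expected to fail (here, an `nvme connect` from a host the subsystem does not allow) and succeeds only when it does fail. A minimal sketch of that negation helper, under the assumption that the real `autotest_common.sh` version also validates the executable and tracks `es` as seen in the trace:

```shell
# Minimal NOT-style helper: invert the wrapped command's exit status.
# The real helper in autotest_common.sh additionally resolves the command
# with type -t / type -P and records the error status in $es.
NOT() {
    if "$@"; then
        return 1   # command unexpectedly succeeded
    else
        return 0   # command failed, as the test expected
    fi
}

NOT false && echo "negated failure treated as success"
```

This is why the rejected connect in the trace ends with `es=1` yet the test continues: the failure was the asserted outcome.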
00:10:41.425 22:20:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:41.686 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:41.686 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:41.686 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:41.686 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:41.686 22:20:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:44.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- 
# local i=0 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:44.228 22:20:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.228 [2024-07-24 22:20:09.491336] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc' 00:10:44.228 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:44.228 could not add new controller: failed to write to nvme-fabrics device 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:44.228 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.489 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.489 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:44.489 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.489 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:44.489 22:20:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.396 22:20:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:46.396 22:20:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:46.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 22:20:12 
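The access-control cycle just traced is: disable `allow_any_host`, watch a connect get rejected, whitelist the host NQN, connect, then remove the host, see the rejection again, and finally re-enable `allow_any_host`. A sketch of that RPC sequence, with `rpc_cmd` replaced by a stub that only echoes the call (the real helper forwards to the SPDK application's JSON-RPC socket):

```shell
# Stub standing in for the rpc_cmd helper; the real one invokes rpc.py
# against a running SPDK nvmf target.
rpc_cmd() { echo "rpc: $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc

rpc_cmd nvmf_subsystem_allow_any_host -d "$NQN"    # deny hosts not whitelisted
rpc_cmd nvmf_subsystem_add_host "$NQN" "$HOSTNQN"  # whitelist this host NQN
rpc_cmd nvmf_subsystem_remove_host "$NQN" "$HOSTNQN"
rpc_cmd nvmf_subsystem_allow_any_host -e "$NQN"    # re-open to all hosts
```

Between the add/remove steps the trace shows the host connecting successfully; outside them, the target logs `nvmf_qpair_access_allowed: *ERROR*: Subsystem ... does not allow host ...` and `nvme connect` fails with an I/O error on `/dev/nvme-fabrics`.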
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 [2024-07-24 22:20:12.056702] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.396 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.967 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.967 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:46.967 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.967 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:46.967 22:20:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.503 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 [2024-07-24 22:20:14.697006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:49.503 22:20:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:49.503 22:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:49.503 22:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 
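Each connect in the loop is followed by the `waitforserial` polling pattern seen in the trace: retry up to 16 times, counting block devices whose serial matches, until the expected count appears. A runnable sketch of that loop, where `count_matches` is a stub standing in for `lsblk -l -o NAME,SERIAL | grep -c "$serial"` (which needs a real connected controller):

```shell
serial=SPDKISFASTANDAWESOME

# Stub for: lsblk -l -o NAME,SERIAL | grep -c "$serial"
# Returns 1 immediately; on a real host this count starts at 0 and flips
# to 1 once the fabrics controller's namespace shows up.
count_matches() { echo 1; }

i=0
nvme_devices=0
while (( i++ <= 15 )); do
    nvme_devices=$(count_matches)
    (( nvme_devices == 1 )) && break
    sleep 1   # the real helper sleeps between attempts (sleep 2 in the trace)
done
(( nvme_devices == 1 )) && echo "serial $serial visible"
```

`waitforserial_disconnect` is the mirror image: it polls with `grep -q -w` until the serial disappears from `lsblk` output after `nvme disconnect`.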
00:10:49.503 22:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:49.503 22:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:49.503 22:20:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:52.044 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:52.044 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:52.044 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.044 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:52.044 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:52.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:52.045 [2024-07-24 22:20:17.248023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.045 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.304 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.304 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:52.304 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.304 22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:52.304 
22:20:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:54.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 [2024-07-24 22:20:19.890019] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.209 22:20:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:54.778 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:54.778 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:54.778 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.778 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:54.778 22:20:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.316 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 [2024-07-24 22:20:22.538650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:57.316 22:20:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.574 22:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:57.574 22:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:10:57.574 22:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:57.574 22:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:57.574 22:20:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:59.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.482 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.482 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 [2024-07-24 22:20:25.202915] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 
22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 
22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 [2024-07-24 22:20:25.250985] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.743 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-07-24 22:20:25.299158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-07-24 22:20:25.347328] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 [2024-07-24 22:20:25.395491] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.744 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:59.744 22:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:59.744 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:00.005 "tick_rate": 2700000000, 00:11:00.005 "poll_groups": [ 00:11:00.005 { 00:11:00.005 "name": "nvmf_tgt_poll_group_000", 00:11:00.005 "admin_qpairs": 2, 00:11:00.005 "io_qpairs": 56, 00:11:00.005 "current_admin_qpairs": 0, 00:11:00.005 "current_io_qpairs": 0, 00:11:00.005 "pending_bdev_io": 0, 00:11:00.005 "completed_nvme_io": 178, 00:11:00.005 "transports": [ 00:11:00.005 { 00:11:00.005 "trtype": "TCP" 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 }, 00:11:00.005 { 00:11:00.005 "name": "nvmf_tgt_poll_group_001", 00:11:00.005 "admin_qpairs": 2, 00:11:00.005 "io_qpairs": 56, 00:11:00.005 "current_admin_qpairs": 0, 00:11:00.005 "current_io_qpairs": 0, 00:11:00.005 "pending_bdev_io": 0, 00:11:00.005 "completed_nvme_io": 107, 00:11:00.005 "transports": [ 00:11:00.005 { 00:11:00.005 "trtype": "TCP" 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 }, 00:11:00.005 { 00:11:00.005 "name": "nvmf_tgt_poll_group_002", 00:11:00.005 "admin_qpairs": 1, 00:11:00.005 "io_qpairs": 56, 00:11:00.005 "current_admin_qpairs": 0, 00:11:00.005 "current_io_qpairs": 0, 00:11:00.005 "pending_bdev_io": 0, 00:11:00.005 "completed_nvme_io": 123, 00:11:00.005 "transports": [ 00:11:00.005 { 00:11:00.005 "trtype": "TCP" 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 }, 00:11:00.005 { 00:11:00.005 "name": "nvmf_tgt_poll_group_003", 00:11:00.005 "admin_qpairs": 2, 00:11:00.005 "io_qpairs": 56, 00:11:00.005 "current_admin_qpairs": 0, 00:11:00.005 "current_io_qpairs": 0, 00:11:00.005 "pending_bdev_io": 0, 
00:11:00.005 "completed_nvme_io": 166, 00:11:00.005 "transports": [ 00:11:00.005 { 00:11:00.005 "trtype": "TCP" 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 } 00:11:00.005 ] 00:11:00.005 }' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 224 > 0 )) 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- 
# set +e 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:00.005 rmmod nvme_tcp 00:11:00.005 rmmod nvme_fabrics 00:11:00.005 rmmod nvme_keyring 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3811282 ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3811282 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3811282 ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3811282 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811282 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811282' 00:11:00.005 killing process with pid 3811282 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3811282 00:11:00.005 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@972 -- # wait 3811282 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.266 22:20:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:02.846 00:11:02.846 real 0m23.534s 00:11:02.846 user 1m16.881s 00:11:02.846 sys 0m3.670s 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.846 ************************************ 00:11:02.846 END TEST nvmf_rpc 00:11:02.846 ************************************ 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:11:02.846 ************************************ 00:11:02.846 START TEST nvmf_invalid 00:11:02.846 ************************************ 00:11:02.846 22:20:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:02.846 * Looking for test storage... 00:11:02.846 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.846 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.847 22:20:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:04.232 22:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:04.232 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:04.233 
22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:04.233 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.233 22:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:04.233 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.233 
22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:04.233 Found net devices under 0000:08:00.0: cvl_0_0 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:04.233 Found net devices under 0000:08:00.1: cvl_0_1 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:04.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:04.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:11:04.233 00:11:04.233 --- 10.0.0.2 ping statistics --- 00:11:04.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.233 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:04.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:04.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:11:04.233 00:11:04.233 --- 10.0.0.1 ping statistics --- 00:11:04.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:04.233 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:04.233 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3814651 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3814651 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3814651 ']' 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:04.234 22:20:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:04.234 [2024-07-24 22:20:29.826471] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:11:04.234 [2024-07-24 22:20:29.826583] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:04.234 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.234 [2024-07-24 22:20:29.892230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:04.491 [2024-07-24 22:20:30.013108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:04.491 [2024-07-24 22:20:30.013174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:04.491 [2024-07-24 22:20:30.013190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:04.491 [2024-07-24 22:20:30.013203] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:04.491 [2024-07-24 22:20:30.013215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:04.491 [2024-07-24 22:20:30.013298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.491 [2024-07-24 22:20:30.013351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.491 [2024-07-24 22:20:30.013404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.491 [2024-07-24 22:20:30.013407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:04.491 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28749 00:11:04.750 [2024-07-24 22:20:30.425395] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:04.750 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:04.750 { 00:11:04.750 "nqn": "nqn.2016-06.io.spdk:cnode28749", 00:11:04.750 "tgt_name": "foobar", 00:11:04.750 "method": "nvmf_create_subsystem", 00:11:04.750 "req_id": 1 00:11:04.750 } 00:11:04.750 Got JSON-RPC error response 00:11:04.750 response: 00:11:04.750 { 00:11:04.750 "code": -32603, 00:11:04.750 "message": "Unable to find target foobar" 00:11:04.750 }' 00:11:04.750 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:04.750 { 00:11:04.750 "nqn": "nqn.2016-06.io.spdk:cnode28749", 00:11:04.750 "tgt_name": "foobar", 00:11:04.750 "method": "nvmf_create_subsystem", 00:11:04.750 "req_id": 1 00:11:04.750 } 00:11:04.750 Got JSON-RPC error response 00:11:04.750 response: 00:11:04.750 { 00:11:04.750 "code": -32603, 00:11:04.750 "message": "Unable to find target foobar" 00:11:04.750 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:04.750 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:04.750 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2989 00:11:05.315 [2024-07-24 22:20:30.726456] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2989: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:05.315 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:05.315 { 00:11:05.315 "nqn": "nqn.2016-06.io.spdk:cnode2989", 00:11:05.315 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:05.315 "method": "nvmf_create_subsystem", 00:11:05.315 "req_id": 1 00:11:05.315 } 00:11:05.315 Got JSON-RPC error response 00:11:05.315 response: 
00:11:05.315 { 00:11:05.315 "code": -32602, 00:11:05.315 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:05.315 }' 00:11:05.315 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:05.315 { 00:11:05.315 "nqn": "nqn.2016-06.io.spdk:cnode2989", 00:11:05.315 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:05.316 "method": "nvmf_create_subsystem", 00:11:05.316 "req_id": 1 00:11:05.316 } 00:11:05.316 Got JSON-RPC error response 00:11:05.316 response: 00:11:05.316 { 00:11:05.316 "code": -32602, 00:11:05.316 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:05.316 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:05.316 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:05.316 22:20:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17234 00:11:05.574 [2024-07-24 22:20:31.027463] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17234: invalid model number 'SPDK_Controller' 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:05.574 { 00:11:05.574 "nqn": "nqn.2016-06.io.spdk:cnode17234", 00:11:05.574 "model_number": "SPDK_Controller\u001f", 00:11:05.574 "method": "nvmf_create_subsystem", 00:11:05.574 "req_id": 1 00:11:05.574 } 00:11:05.574 Got JSON-RPC error response 00:11:05.574 response: 00:11:05.574 { 00:11:05.574 "code": -32602, 00:11:05.574 "message": "Invalid MN SPDK_Controller\u001f" 00:11:05.574 }' 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:05.574 { 00:11:05.574 "nqn": "nqn.2016-06.io.spdk:cnode17234", 00:11:05.574 "model_number": "SPDK_Controller\u001f", 00:11:05.574 "method": "nvmf_create_subsystem", 00:11:05.574 "req_id": 1 00:11:05.574 } 
00:11:05.574 Got JSON-RPC error response 00:11:05.574 response: 00:11:05.574 { 00:11:05.574 "code": -32602, 00:11:05.574 "message": "Invalid MN SPDK_Controller\u001f" 00:11:05.574 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:05.574 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ J == \- ]] 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'J[9$PA>bsh<8e^GSmP)]_' 00:11:05.575 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'J[9$PA>bsh<8e^GSmP)]_' nqn.2016-06.io.spdk:cnode12798 00:11:05.835 [2024-07-24 22:20:31.408653] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12798: invalid serial number 'J[9$PA>bsh<8e^GSmP)]_' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:05.835 { 00:11:05.835 "nqn": "nqn.2016-06.io.spdk:cnode12798", 00:11:05.835 "serial_number": "J[9$PA>bsh<8e^GSmP)]_", 00:11:05.835 "method": "nvmf_create_subsystem", 00:11:05.835 "req_id": 1 00:11:05.835 } 00:11:05.835 Got JSON-RPC error response 00:11:05.835 response: 00:11:05.835 { 00:11:05.835 "code": -32602, 00:11:05.835 "message": "Invalid SN J[9$PA>bsh<8e^GSmP)]_" 00:11:05.835 }' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:05.835 { 00:11:05.835 "nqn": "nqn.2016-06.io.spdk:cnode12798", 00:11:05.835 "serial_number": "J[9$PA>bsh<8e^GSmP)]_", 00:11:05.835 "method": "nvmf_create_subsystem", 00:11:05.835 "req_id": 1 00:11:05.835 } 00:11:05.835 Got JSON-RPC error response 
00:11:05.835 response: 00:11:05.835 { 00:11:05.835 "code": -32602, 00:11:05.835 "message": "Invalid SN J[9$PA>bsh<8e^GSmP)]_" 00:11:05.835 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 44 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:05.835 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=s 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:05.836 
22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:05.836 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:05.836 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:05.836 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:06.095 22:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG' 00:11:06.095 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG' nqn.2016-06.io.spdk:cnode6033 00:11:06.354 [2024-07-24 22:20:31.830049] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6033: invalid model number 
'",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG' 00:11:06.354 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:06.354 { 00:11:06.354 "nqn": "nqn.2016-06.io.spdk:cnode6033", 00:11:06.354 "model_number": "\",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG", 00:11:06.354 "method": "nvmf_create_subsystem", 00:11:06.354 "req_id": 1 00:11:06.354 } 00:11:06.354 Got JSON-RPC error response 00:11:06.354 response: 00:11:06.354 { 00:11:06.354 "code": -32602, 00:11:06.354 "message": "Invalid MN \",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG" 00:11:06.354 }' 00:11:06.354 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:06.354 { 00:11:06.354 "nqn": "nqn.2016-06.io.spdk:cnode6033", 00:11:06.354 "model_number": "\",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG", 00:11:06.354 "method": "nvmf_create_subsystem", 00:11:06.354 "req_id": 1 00:11:06.354 } 00:11:06.354 Got JSON-RPC error response 00:11:06.354 response: 00:11:06.354 { 00:11:06.354 "code": -32602, 00:11:06.354 "message": "Invalid MN \",]O2=FEz:exs{/?B8/27M]t14S4-yB=30A+w(kKG" 00:11:06.354 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:06.354 22:20:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:06.613 [2024-07-24 22:20:32.127094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.613 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:06.872 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:06.872 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:06.872 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:06.872 22:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:06.872 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:07.130 [2024-07-24 22:20:32.729000] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:07.130 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:07.130 { 00:11:07.130 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:07.130 "listen_address": { 00:11:07.130 "trtype": "tcp", 00:11:07.130 "traddr": "", 00:11:07.130 "trsvcid": "4421" 00:11:07.130 }, 00:11:07.130 "method": "nvmf_subsystem_remove_listener", 00:11:07.130 "req_id": 1 00:11:07.130 } 00:11:07.130 Got JSON-RPC error response 00:11:07.130 response: 00:11:07.131 { 00:11:07.131 "code": -32602, 00:11:07.131 "message": "Invalid parameters" 00:11:07.131 }' 00:11:07.131 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:07.131 { 00:11:07.131 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:07.131 "listen_address": { 00:11:07.131 "trtype": "tcp", 00:11:07.131 "traddr": "", 00:11:07.131 "trsvcid": "4421" 00:11:07.131 }, 00:11:07.131 "method": "nvmf_subsystem_remove_listener", 00:11:07.131 "req_id": 1 00:11:07.131 } 00:11:07.131 Got JSON-RPC error response 00:11:07.131 response: 00:11:07.131 { 00:11:07.131 "code": -32602, 00:11:07.131 "message": "Invalid parameters" 00:11:07.131 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:07.131 22:20:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25271 -i 0 00:11:07.389 [2024-07-24 22:20:33.030034] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25271: invalid cntlid range 
[0-65519] 00:11:07.389 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:07.389 { 00:11:07.389 "nqn": "nqn.2016-06.io.spdk:cnode25271", 00:11:07.389 "min_cntlid": 0, 00:11:07.389 "method": "nvmf_create_subsystem", 00:11:07.389 "req_id": 1 00:11:07.389 } 00:11:07.389 Got JSON-RPC error response 00:11:07.389 response: 00:11:07.389 { 00:11:07.389 "code": -32602, 00:11:07.389 "message": "Invalid cntlid range [0-65519]" 00:11:07.389 }' 00:11:07.389 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:07.389 { 00:11:07.389 "nqn": "nqn.2016-06.io.spdk:cnode25271", 00:11:07.389 "min_cntlid": 0, 00:11:07.389 "method": "nvmf_create_subsystem", 00:11:07.389 "req_id": 1 00:11:07.389 } 00:11:07.389 Got JSON-RPC error response 00:11:07.389 response: 00:11:07.389 { 00:11:07.389 "code": -32602, 00:11:07.389 "message": "Invalid cntlid range [0-65519]" 00:11:07.389 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:07.389 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8833 -i 65520 00:11:07.648 [2024-07-24 22:20:33.326945] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8833: invalid cntlid range [65520-65519] 00:11:07.648 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:07.648 { 00:11:07.648 "nqn": "nqn.2016-06.io.spdk:cnode8833", 00:11:07.648 "min_cntlid": 65520, 00:11:07.648 "method": "nvmf_create_subsystem", 00:11:07.648 "req_id": 1 00:11:07.648 } 00:11:07.648 Got JSON-RPC error response 00:11:07.648 response: 00:11:07.648 { 00:11:07.648 "code": -32602, 00:11:07.648 "message": "Invalid cntlid range [65520-65519]" 00:11:07.648 }' 00:11:07.648 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:07.648 { 00:11:07.648 
"nqn": "nqn.2016-06.io.spdk:cnode8833", 00:11:07.648 "min_cntlid": 65520, 00:11:07.648 "method": "nvmf_create_subsystem", 00:11:07.648 "req_id": 1 00:11:07.648 } 00:11:07.648 Got JSON-RPC error response 00:11:07.648 response: 00:11:07.648 { 00:11:07.648 "code": -32602, 00:11:07.648 "message": "Invalid cntlid range [65520-65519]" 00:11:07.648 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:07.648 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3874 -I 0 00:11:08.213 [2024-07-24 22:20:33.627909] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3874: invalid cntlid range [1-0] 00:11:08.213 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:08.213 { 00:11:08.213 "nqn": "nqn.2016-06.io.spdk:cnode3874", 00:11:08.213 "max_cntlid": 0, 00:11:08.213 "method": "nvmf_create_subsystem", 00:11:08.213 "req_id": 1 00:11:08.213 } 00:11:08.213 Got JSON-RPC error response 00:11:08.213 response: 00:11:08.213 { 00:11:08.213 "code": -32602, 00:11:08.213 "message": "Invalid cntlid range [1-0]" 00:11:08.213 }' 00:11:08.213 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:08.213 { 00:11:08.213 "nqn": "nqn.2016-06.io.spdk:cnode3874", 00:11:08.213 "max_cntlid": 0, 00:11:08.213 "method": "nvmf_create_subsystem", 00:11:08.213 "req_id": 1 00:11:08.213 } 00:11:08.213 Got JSON-RPC error response 00:11:08.213 response: 00:11:08.213 { 00:11:08.213 "code": -32602, 00:11:08.213 "message": "Invalid cntlid range [1-0]" 00:11:08.213 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:08.213 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7067 -I 65520 00:11:08.472 [2024-07-24 
22:20:33.928936] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7067: invalid cntlid range [1-65520] 00:11:08.472 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:08.472 { 00:11:08.472 "nqn": "nqn.2016-06.io.spdk:cnode7067", 00:11:08.472 "max_cntlid": 65520, 00:11:08.472 "method": "nvmf_create_subsystem", 00:11:08.472 "req_id": 1 00:11:08.472 } 00:11:08.472 Got JSON-RPC error response 00:11:08.472 response: 00:11:08.472 { 00:11:08.472 "code": -32602, 00:11:08.472 "message": "Invalid cntlid range [1-65520]" 00:11:08.472 }' 00:11:08.472 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:08.472 { 00:11:08.472 "nqn": "nqn.2016-06.io.spdk:cnode7067", 00:11:08.472 "max_cntlid": 65520, 00:11:08.472 "method": "nvmf_create_subsystem", 00:11:08.472 "req_id": 1 00:11:08.472 } 00:11:08.472 Got JSON-RPC error response 00:11:08.472 response: 00:11:08.472 { 00:11:08.472 "code": -32602, 00:11:08.472 "message": "Invalid cntlid range [1-65520]" 00:11:08.472 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:08.472 22:20:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13623 -i 6 -I 5 00:11:08.731 [2024-07-24 22:20:34.213894] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13623: invalid cntlid range [6-5] 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:08.731 { 00:11:08.731 "nqn": "nqn.2016-06.io.spdk:cnode13623", 00:11:08.731 "min_cntlid": 6, 00:11:08.731 "max_cntlid": 5, 00:11:08.731 "method": "nvmf_create_subsystem", 00:11:08.731 "req_id": 1 00:11:08.731 } 00:11:08.731 Got JSON-RPC error response 00:11:08.731 response: 00:11:08.731 { 00:11:08.731 "code": -32602, 00:11:08.731 "message": "Invalid cntlid range 
[6-5]" 00:11:08.731 }' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:08.731 { 00:11:08.731 "nqn": "nqn.2016-06.io.spdk:cnode13623", 00:11:08.731 "min_cntlid": 6, 00:11:08.731 "max_cntlid": 5, 00:11:08.731 "method": "nvmf_create_subsystem", 00:11:08.731 "req_id": 1 00:11:08.731 } 00:11:08.731 Got JSON-RPC error response 00:11:08.731 response: 00:11:08.731 { 00:11:08.731 "code": -32602, 00:11:08.731 "message": "Invalid cntlid range [6-5]" 00:11:08.731 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:08.731 { 00:11:08.731 "name": "foobar", 00:11:08.731 "method": "nvmf_delete_target", 00:11:08.731 "req_id": 1 00:11:08.731 } 00:11:08.731 Got JSON-RPC error response 00:11:08.731 response: 00:11:08.731 { 00:11:08.731 "code": -32602, 00:11:08.731 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:08.731 }' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:08.731 { 00:11:08.731 "name": "foobar", 00:11:08.731 "method": "nvmf_delete_target", 00:11:08.731 "req_id": 1 00:11:08.731 } 00:11:08.731 Got JSON-RPC error response 00:11:08.731 response: 00:11:08.731 { 00:11:08.731 "code": -32602, 00:11:08.731 "message": "The specified target doesn't exist, cannot delete it." 
00:11:08.731 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.731 rmmod nvme_tcp 00:11:08.731 rmmod nvme_fabrics 00:11:08.731 rmmod nvme_keyring 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3814651 ']' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3814651 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3814651 ']' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3814651 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.731 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3814651 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3814651' 00:11:08.991 killing process with pid 3814651 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3814651 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3814651 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.991 22:20:34 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:11.532 00:11:11.532 real 0m8.715s 00:11:11.532 user 0m22.451s 00:11:11.532 sys 0m2.147s 
00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:11.532 ************************************ 00:11:11.532 END TEST nvmf_invalid 00:11:11.532 ************************************ 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:11.532 ************************************ 00:11:11.532 START TEST nvmf_connect_stress 00:11:11.532 ************************************ 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:11.532 * Looking for test storage... 
00:11:11.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:11.532 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:11.533 22:20:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:12.913 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.913 22:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:12.913 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.913 22:20:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:12.913 Found net devices under 0000:08:00.0: cvl_0_0 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:12.913 Found net devices under 0000:08:00.1: cvl_0_1 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.913 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.914 
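The loop traced above (`nvmf/common.sh@383`–`@401`) maps each detected PCI address to its kernel network interface by globbing sysfs and stripping the path prefix. A sketch of that lookup (an approximation, not SPDK's verbatim code), run against a throwaway fake sysfs tree so it works without the real `/sys/bus/pci/devices` entries or an E810 NIC:

```shell
# Recreate the pci -> netdev lookup seen in the trace against a fake
# sysfs root, so the sketch runs on any machine without real hardware.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:08:00.0/net/cvl_0_0" "$sysfs/0000:08:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:08:00.0 0000:08:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)       # glob the device's net/ entries
  pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

The `${pci_net_devs[@]##*/}` expansion is the same trick the trace shows at `common.sh@399`: each array element keeps only the text after the last `/`.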
22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.914 
22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:11:12.914 00:11:12.914 --- 10.0.0.2 ping statistics --- 00:11:12.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.914 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:12.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:11:12.914 00:11:12.914 --- 10.0.0.1 ping statistics --- 00:11:12.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.914 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3816726 00:11:12.914 22:20:38 
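The `nvmf_tcp_init` steps traced above split one host's two NIC ports into a target side (moved into a fresh network namespace, 10.0.0.2) and an initiator side (left in the root namespace, 10.0.0.1), then verify connectivity both ways. The command sequence below is taken from the trace, but the real thing needs root and the physical `cvl_0_0`/`cvl_0_1` ports, so this sketch only assembles and prints the plan (set `DRY_RUN=0` to execute it, an assumption-laden exercise left to a suitably equipped machine):

```shell
# Dry-run sketch of the namespace split from common.sh@248-@268.
DRY_RUN=${DRY_RUN:-1}
NS=cvl_0_0_ns_spdk

plan=(
  "ip netns add $NS"
  "ip link set cvl_0_0 netns $NS"                          # target port into the ns
  "ip addr add 10.0.0.1/24 dev cvl_0_1"                    # initiator side address
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0"  # target side address
  "ip link set cvl_0_1 up"
  "ip netns exec $NS ip link set cvl_0_0 up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
  "ping -c 1 10.0.0.2"                                     # root ns -> target
  "ip netns exec $NS ping -c 1 10.0.0.1"                   # ns -> initiator
)
for cmd in "${plan[@]}"; do
  if (( DRY_RUN )); then echo "+ $cmd"; else $cmd; fi
done
```

Running the target inside its own namespace is what forces the initiator's NVMe/TCP traffic through the real NIC ports instead of the loopback path, which is why the trace prefixes every target command with `ip netns exec cvl_0_0_ns_spdk`.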
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3816726 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3816726 ']' 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.914 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:12.914 [2024-07-24 22:20:38.590185] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:11:12.914 [2024-07-24 22:20:38.590287] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.174 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.174 [2024-07-24 22:20:38.655246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.174 [2024-07-24 22:20:38.771653] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:13.174 [2024-07-24 22:20:38.771722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.174 [2024-07-24 22:20:38.771738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.174 [2024-07-24 22:20:38.771751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.174 [2024-07-24 22:20:38.771763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.174 [2024-07-24 22:20:38.771848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.174 [2024-07-24 22:20:38.771902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.174 [2024-07-24 22:20:38.771905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-07-24 22:20:38.909300] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 [2024-07-24 22:20:38.937815] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.433 NULL1 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
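The four `rpc_cmd` calls traced above configure the freshly started target: create the TCP transport, create subsystem `nqn.2016-06.io.spdk:cnode1`, add a listener on 10.0.0.2:4420, and back it with a null bdev. Spelled out as `scripts/rpc.py` invocations (the `RPC` path is a placeholder, and a live `nvmf_tgt` on `/var/tmp/spdk.sock` would be required, so the sketch only prints the calls rather than issuing them):

```shell
# The RPC sequence from the trace, assembled but not executed.
RPC="scripts/rpc.py"
NQN="nqn.2016-06.io.spdk:cnode1"

rpc_calls=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"                     # TCP transport
  "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10"   # allow any host, max 10 namespaces
  "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "$RPC bdev_null_create NULL1 1000 512"                             # 1000 MiB null bdev, 512 B blocks
)
printf '%s\n' "${rpc_calls[@]}"
```

A null bdev discards writes and returns zeroes on reads, which suits a connect/disconnect stress test: the I/O path exists but costs nothing, so the load lands on connection handling.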
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3816747 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:13.433 EAL: No free 2048 kB hugepages reported on node 1 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:13.433 22:20:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:13.433 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:13.433 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.433 
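The `for i in $(seq 1 20)` / `cat` loop traced above runs twenty times to build up `rpc.txt` before the stress run starts. The exact lines each `cat` appends are not visible in this trace, so the body below is a hypothetical placeholder; it only illustrates the pattern of accumulating a batch RPC file for later replay:

```shell
# Hypothetical sketch of the batch-file build-up; the real appended RPC
# lines come from connect_stress.sh and are not shown in the trace.
rpcs=$(mktemp)
for i in $(seq 1 20); do
  cat <<EOF >> "$rpcs"
bdev_get_bdevs
EOF
done

lines=$(wc -l < "$rpcs")
echo "batched $lines RPC lines into $rpcs"
rm -f "$rpcs"
```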
22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.433 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.692 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.692 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:13.692 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.692 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.692 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:13.952 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:13.952 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:13.952 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:13.952 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:13.952 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:14.521 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.521 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:14.521 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.521 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.521 22:20:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 
00:11:14.778 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:14.778 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:14.778 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:14.778 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:14.778 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.038 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.038 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:15.038 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.038 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.038 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.298 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.298 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:15.298 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.298 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.298 22:20:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:15.556 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:15.556 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 
3816747 00:11:15.556 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:15.556 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:15.556 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.123 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.123 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:16.123 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.123 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.123 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.382 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.382 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:16.382 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.382 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.382 22:20:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.642 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.642 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:16.642 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.642 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:11:16.642 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:16.901 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:16.901 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:16.901 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:16.902 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:16.902 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.160 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.160 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:17.160 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.160 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.160 22:20:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.728 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.728 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:17.728 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.728 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.728 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:17.988 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- 
# [[ 0 == 0 ]] 00:11:17.988 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:17.988 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:17.988 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.988 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.245 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.246 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:18.246 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.246 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.246 22:20:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.505 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.505 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:18.505 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.505 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.505 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:18.762 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:18.762 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:18.762 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:18.762 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.762 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.327 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.327 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:19.327 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.327 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.327 22:20:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.586 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.586 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:19.586 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.586 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.586 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:19.845 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.845 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:19.845 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:19.845 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.845 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:11:20.103 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.103 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:20.103 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.103 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.103 22:20:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.670 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.670 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:20.670 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.670 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.670 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.929 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:20.929 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:20.929 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:20.929 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:20.929 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.188 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.188 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:21.188 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.188 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.188 22:20:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.446 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.446 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:21.446 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.446 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.446 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:21.705 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:21.705 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:21.705 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:21.705 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:21.705 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.274 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.274 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:22.274 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.274 22:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.274 22:20:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.531 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.531 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:22.531 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.531 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.531 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:22.788 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.788 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:22.788 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:22.788 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.788 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.046 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.046 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:23.046 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.046 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.046 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.305 
22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.305 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:23.305 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:23.305 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.305 22:20:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:23.564 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3816747 00:11:23.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3816747) - No such process 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3816747 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.823 22:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.823 rmmod nvme_tcp 00:11:23.823 rmmod nvme_fabrics 00:11:23.823 rmmod nvme_keyring 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3816726 ']' 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3816726 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3816726 ']' 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3816726 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3816726 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 
3816726' 00:11:23.823 killing process with pid 3816726 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3816726 00:11:23.823 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3816726 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:24.081 22:20:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.989 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.989 00:11:25.989 real 0m14.901s 00:11:25.989 user 0m38.457s 00:11:25.989 sys 0m5.418s 00:11:25.989 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:25.989 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.989 ************************************ 00:11:25.989 END TEST nvmf_connect_stress 00:11:25.989 ************************************ 00:11:25.990 22:20:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:25.990 22:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:25.990 22:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:25.990 22:20:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.282 ************************************ 00:11:26.282 START TEST nvmf_fused_ordering 00:11:26.282 ************************************ 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:26.282 * Looking for test storage... 00:11:26.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.282 22:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:26.282 22:20:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.282 22:20:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.212 22:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.212 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:28.213 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:28.213 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:28.213 Found net devices under 0000:08:00.0: cvl_0_0 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:28.213 Found net devices under 0000:08:00.1: cvl_0_1 00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.213 22:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:28.213 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:28.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms
00:11:28.213
00:11:28.213 --- 10.0.0.2 ping statistics ---
00:11:28.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.213 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:28.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:28.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms
00:11:28.213
00:11:28.213 --- 10.0.0.1 ping statistics ---
00:11:28.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.213 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3819255
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3819255
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3819255 ']'
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:28.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:28.213 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.213 [2024-07-24 22:20:53.641792] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
00:11:28.214 [2024-07-24 22:20:53.641888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:28.214 EAL: No free 2048 kB hugepages reported on node 1
00:11:28.214 [2024-07-24 22:20:53.706851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:28.214 [2024-07-24 22:20:53.824871] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:28.214 [2024-07-24 22:20:53.824941] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:28.214 [2024-07-24 22:20:53.824957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:28.214 [2024-07-24 22:20:53.824970] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:28.214 [2024-07-24 22:20:53.824982] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:28.214 [2024-07-24 22:20:53.825013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 [2024-07-24 22:20:53.961248] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 [2024-07-24 22:20:53.977414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 NULL1
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:28.475 22:20:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:28.475 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:28.475 22:20:54 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:28.475 [2024-07-24 22:20:54.024329] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
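For reference, the nvmf_tcp_init bring-up traced earlier in this run reduces to a short standalone sequence. This is a sketch only, using the interface names, addresses, and port from this run; every command requires root, and the NIC names (cvl_0_0/cvl_0_1) are specific to this test machine:

```shell
# Sketch of the network setup performed by nvmf_tcp_init in this run.
# Target port cvl_0_0 is moved into a network namespace so that the
# initiator (host side, cvl_0_1) reaches it over a real TCP path.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP port
ping -c 1 10.0.0.2                                         # verify host -> target
ip netns exec "$NS" ping -c 1 10.0.0.1                     # verify target -> host
```

The namespace is also why the target is launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt` in the trace: the nvmf_tgt process must see the target-side interface.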
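The `rpc_cmd` calls in the trace (fused_ordering.sh lines 15-20) build the target configuration before the fused_ordering binary connects. As a hedged sketch, the same sequence can be replayed by hand against a running nvmf_tgt with SPDK's `scripts/rpc.py` (the script path is an assumption; the subcommands and arguments are the ones from this run):

```shell
# Sketch: replay of the target configuration from this run via rpc.py,
# assuming nvmf_tgt is already listening on /var/tmp/spdk.sock.
RPC=./scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8192-byte in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                      # 1000 MB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this, the namespace ("Namespace ID: 1 size: 1GB" in the output below) is visible to any host that connects to the subsystem at 10.0.0.2:4420.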
00:11:28.475 [2024-07-24 22:20:54.024378] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3819281 ] 00:11:28.475 EAL: No free 2048 kB hugepages reported on node 1 00:11:29.045 Attached to nqn.2016-06.io.spdk:cnode1 00:11:29.045 Namespace ID: 1 size: 1GB 00:11:29.045 fused_ordering(0) 00:11:29.045 fused_ordering(1) 00:11:29.045 fused_ordering(2) 00:11:29.045 fused_ordering(3) 00:11:29.045 fused_ordering(4) 00:11:29.045 fused_ordering(5) 00:11:29.045 fused_ordering(6) 00:11:29.045 fused_ordering(7) 00:11:29.045 fused_ordering(8) 00:11:29.045 fused_ordering(9) 00:11:29.045 fused_ordering(10) 00:11:29.045 fused_ordering(11) 00:11:29.045 fused_ordering(12) 00:11:29.045 fused_ordering(13) 00:11:29.045 fused_ordering(14) 00:11:29.045 fused_ordering(15) 00:11:29.045 fused_ordering(16) 00:11:29.045 fused_ordering(17) 00:11:29.045 fused_ordering(18) 00:11:29.045 fused_ordering(19) 00:11:29.045 fused_ordering(20) 00:11:29.045 fused_ordering(21) 00:11:29.045 fused_ordering(22) 00:11:29.045 fused_ordering(23) 00:11:29.045 fused_ordering(24) 00:11:29.045 fused_ordering(25) 00:11:29.045 fused_ordering(26) 00:11:29.045 fused_ordering(27) 00:11:29.046 fused_ordering(28) 00:11:29.046 fused_ordering(29) 00:11:29.046 fused_ordering(30) 00:11:29.046 fused_ordering(31) 00:11:29.046 fused_ordering(32) 00:11:29.046 fused_ordering(33) 00:11:29.046 fused_ordering(34) 00:11:29.046 fused_ordering(35) 00:11:29.046 fused_ordering(36) 00:11:29.046 fused_ordering(37) 00:11:29.046 fused_ordering(38) 00:11:29.046 fused_ordering(39) 00:11:29.046 fused_ordering(40) 00:11:29.046 fused_ordering(41) 00:11:29.046 fused_ordering(42) 00:11:29.046 fused_ordering(43) 00:11:29.046 fused_ordering(44) 00:11:29.046 fused_ordering(45) 00:11:29.046 fused_ordering(46) 00:11:29.046 fused_ordering(47) 00:11:29.046 
fused_ordering(48) 00:11:29.046 fused_ordering(49) 00:11:29.046 fused_ordering(50) 00:11:29.046 fused_ordering(51) 00:11:29.046 fused_ordering(52) 00:11:29.046 fused_ordering(53) 00:11:29.046 fused_ordering(54) 00:11:29.046 fused_ordering(55) 00:11:29.046 fused_ordering(56) 00:11:29.046 fused_ordering(57) 00:11:29.046 fused_ordering(58) 00:11:29.046 fused_ordering(59) 00:11:29.046 fused_ordering(60) 00:11:29.046 fused_ordering(61) 00:11:29.046 fused_ordering(62) 00:11:29.046 fused_ordering(63) 00:11:29.046 fused_ordering(64) 00:11:29.046 fused_ordering(65) 00:11:29.046 fused_ordering(66) 00:11:29.046 fused_ordering(67) 00:11:29.046 fused_ordering(68) 00:11:29.046 fused_ordering(69) 00:11:29.046 fused_ordering(70) 00:11:29.046 fused_ordering(71) 00:11:29.046 fused_ordering(72) 00:11:29.046 fused_ordering(73) 00:11:29.046 fused_ordering(74) 00:11:29.046 fused_ordering(75) 00:11:29.046 fused_ordering(76) 00:11:29.046 fused_ordering(77) 00:11:29.046 fused_ordering(78) 00:11:29.046 fused_ordering(79) 00:11:29.046 fused_ordering(80) 00:11:29.046 fused_ordering(81) 00:11:29.046 fused_ordering(82) 00:11:29.046 fused_ordering(83) 00:11:29.046 fused_ordering(84) 00:11:29.046 fused_ordering(85) 00:11:29.046 fused_ordering(86) 00:11:29.046 fused_ordering(87) 00:11:29.046 fused_ordering(88) 00:11:29.046 fused_ordering(89) 00:11:29.046 fused_ordering(90) 00:11:29.046 fused_ordering(91) 00:11:29.046 fused_ordering(92) 00:11:29.046 fused_ordering(93) 00:11:29.046 fused_ordering(94) 00:11:29.046 fused_ordering(95) 00:11:29.046 fused_ordering(96) 00:11:29.046 fused_ordering(97) 00:11:29.046 fused_ordering(98) 00:11:29.046 fused_ordering(99) 00:11:29.046 fused_ordering(100) 00:11:29.046 fused_ordering(101) 00:11:29.046 fused_ordering(102) 00:11:29.046 fused_ordering(103) 00:11:29.046 fused_ordering(104) 00:11:29.046 fused_ordering(105) 00:11:29.046 fused_ordering(106) 00:11:29.046 fused_ordering(107) 00:11:29.046 fused_ordering(108) 00:11:29.046 fused_ordering(109) 00:11:29.046 
fused_ordering(110) 00:11:29.046 fused_ordering(111) 00:11:29.046 fused_ordering(112) 00:11:29.046 fused_ordering(113) 00:11:29.046 fused_ordering(114) 00:11:29.046 fused_ordering(115) 00:11:29.046 fused_ordering(116) 00:11:29.046 fused_ordering(117) 00:11:29.046 fused_ordering(118) 00:11:29.046 fused_ordering(119) 00:11:29.046 fused_ordering(120) 00:11:29.046 fused_ordering(121) 00:11:29.046 fused_ordering(122) 00:11:29.046 fused_ordering(123) 00:11:29.046 fused_ordering(124) 00:11:29.046 fused_ordering(125) 00:11:29.046 fused_ordering(126) 00:11:29.046 fused_ordering(127) 00:11:29.046 fused_ordering(128) 00:11:29.046 fused_ordering(129) 00:11:29.046 fused_ordering(130) 00:11:29.046 fused_ordering(131) 00:11:29.046 fused_ordering(132) 00:11:29.046 fused_ordering(133) 00:11:29.046 fused_ordering(134) 00:11:29.046 fused_ordering(135) 00:11:29.046 fused_ordering(136) 00:11:29.046 fused_ordering(137) 00:11:29.046 fused_ordering(138) 00:11:29.046 fused_ordering(139) 00:11:29.046 fused_ordering(140) 00:11:29.046 fused_ordering(141) 00:11:29.046 fused_ordering(142) 00:11:29.046 fused_ordering(143) 00:11:29.046 fused_ordering(144) 00:11:29.046 fused_ordering(145) 00:11:29.046 fused_ordering(146) 00:11:29.046 fused_ordering(147) 00:11:29.046 fused_ordering(148) 00:11:29.046 fused_ordering(149) 00:11:29.046 fused_ordering(150) 00:11:29.046 fused_ordering(151) 00:11:29.046 fused_ordering(152) 00:11:29.046 fused_ordering(153) 00:11:29.046 fused_ordering(154) 00:11:29.046 fused_ordering(155) 00:11:29.046 fused_ordering(156) 00:11:29.046 fused_ordering(157) 00:11:29.046 fused_ordering(158) 00:11:29.046 fused_ordering(159) 00:11:29.046 fused_ordering(160) 00:11:29.046 fused_ordering(161) 00:11:29.046 fused_ordering(162) 00:11:29.046 fused_ordering(163) 00:11:29.046 fused_ordering(164) 00:11:29.046 fused_ordering(165) 00:11:29.046 fused_ordering(166) 00:11:29.046 fused_ordering(167) 00:11:29.046 fused_ordering(168) 00:11:29.046 fused_ordering(169) 00:11:29.046 fused_ordering(170) 
00:11:29.046 fused_ordering(171) 00:11:29.046 fused_ordering(172) 00:11:29.046 fused_ordering(173) 00:11:29.046 fused_ordering(174) 00:11:29.046 fused_ordering(175) 00:11:29.046 fused_ordering(176) 00:11:29.046 fused_ordering(177) 00:11:29.046 fused_ordering(178) 00:11:29.046 fused_ordering(179) 00:11:29.046 fused_ordering(180) 00:11:29.046 fused_ordering(181) 00:11:29.046 fused_ordering(182) 00:11:29.046 fused_ordering(183) 00:11:29.046 fused_ordering(184) 00:11:29.046 fused_ordering(185) 00:11:29.046 fused_ordering(186) 00:11:29.046 fused_ordering(187) 00:11:29.046 fused_ordering(188) 00:11:29.046 fused_ordering(189) 00:11:29.046 fused_ordering(190) 00:11:29.046 fused_ordering(191) 00:11:29.046 fused_ordering(192) 00:11:29.046 fused_ordering(193) 00:11:29.046 fused_ordering(194) 00:11:29.046 fused_ordering(195) 00:11:29.046 fused_ordering(196) 00:11:29.046 fused_ordering(197) 00:11:29.046 fused_ordering(198) 00:11:29.046 fused_ordering(199) 00:11:29.046 fused_ordering(200) 00:11:29.046 fused_ordering(201) 00:11:29.046 fused_ordering(202) 00:11:29.046 fused_ordering(203) 00:11:29.046 fused_ordering(204) 00:11:29.046 fused_ordering(205) 00:11:29.305 fused_ordering(206) 00:11:29.305 fused_ordering(207) 00:11:29.305 fused_ordering(208) 00:11:29.305 fused_ordering(209) 00:11:29.306 fused_ordering(210) 00:11:29.306 fused_ordering(211) 00:11:29.306 fused_ordering(212) 00:11:29.306 fused_ordering(213) 00:11:29.306 fused_ordering(214) 00:11:29.306 fused_ordering(215) 00:11:29.306 fused_ordering(216) 00:11:29.306 fused_ordering(217) 00:11:29.306 fused_ordering(218) 00:11:29.306 fused_ordering(219) 00:11:29.306 fused_ordering(220) 00:11:29.306 fused_ordering(221) 00:11:29.306 fused_ordering(222) 00:11:29.306 fused_ordering(223) 00:11:29.306 fused_ordering(224) 00:11:29.306 fused_ordering(225) 00:11:29.306 fused_ordering(226) 00:11:29.306 fused_ordering(227) 00:11:29.306 fused_ordering(228) 00:11:29.306 fused_ordering(229) 00:11:29.306 fused_ordering(230) 00:11:29.306 
fused_ordering(231) 00:11:29.306 fused_ordering(232) 00:11:29.306 fused_ordering(233) 00:11:29.306 fused_ordering(234) 00:11:29.306 fused_ordering(235) 00:11:29.306 fused_ordering(236) 00:11:29.306 fused_ordering(237) 00:11:29.306 fused_ordering(238) 00:11:29.306 fused_ordering(239) 00:11:29.306 fused_ordering(240) 00:11:29.306 fused_ordering(241) 00:11:29.306 fused_ordering(242) 00:11:29.306 fused_ordering(243) 00:11:29.306 fused_ordering(244) 00:11:29.306 fused_ordering(245) 00:11:29.306 fused_ordering(246) 00:11:29.306 fused_ordering(247) 00:11:29.306 fused_ordering(248) 00:11:29.306 fused_ordering(249) 00:11:29.306 fused_ordering(250) 00:11:29.306 fused_ordering(251) 00:11:29.306 fused_ordering(252) 00:11:29.306 fused_ordering(253) 00:11:29.306 fused_ordering(254) 00:11:29.306 fused_ordering(255) 00:11:29.306 fused_ordering(256) 00:11:29.306 fused_ordering(257) 00:11:29.306 fused_ordering(258) 00:11:29.306 fused_ordering(259) 00:11:29.306 fused_ordering(260) 00:11:29.306 fused_ordering(261) 00:11:29.306 fused_ordering(262) 00:11:29.306 fused_ordering(263) 00:11:29.306 fused_ordering(264) 00:11:29.306 fused_ordering(265) 00:11:29.306 fused_ordering(266) 00:11:29.306 fused_ordering(267) 00:11:29.306 fused_ordering(268) 00:11:29.306 fused_ordering(269) 00:11:29.306 fused_ordering(270) 00:11:29.306 fused_ordering(271) 00:11:29.306 fused_ordering(272) 00:11:29.306 fused_ordering(273) 00:11:29.306 fused_ordering(274) 00:11:29.306 fused_ordering(275) 00:11:29.306 fused_ordering(276) 00:11:29.306 fused_ordering(277) 00:11:29.306 fused_ordering(278) 00:11:29.306 fused_ordering(279) 00:11:29.306 fused_ordering(280) 00:11:29.306 fused_ordering(281) 00:11:29.306 fused_ordering(282) 00:11:29.306 fused_ordering(283) 00:11:29.306 fused_ordering(284) 00:11:29.306 fused_ordering(285) 00:11:29.306 fused_ordering(286) 00:11:29.306 fused_ordering(287) 00:11:29.306 fused_ordering(288) 00:11:29.306 fused_ordering(289) 00:11:29.306 fused_ordering(290) 00:11:29.306 fused_ordering(291) 
00:11:29.306 fused_ordering(292) 00:11:29.306 fused_ordering(293) 00:11:29.306 fused_ordering(294) 00:11:29.306 fused_ordering(295) 00:11:29.306 fused_ordering(296) 00:11:29.306 fused_ordering(297) 00:11:29.306 fused_ordering(298) 00:11:29.306 fused_ordering(299) 00:11:29.306 fused_ordering(300) 00:11:29.306 fused_ordering(301) 00:11:29.306 fused_ordering(302) 00:11:29.306 fused_ordering(303) 00:11:29.306 fused_ordering(304) 00:11:29.306 fused_ordering(305) 00:11:29.306 fused_ordering(306) 00:11:29.306 fused_ordering(307) 00:11:29.306 fused_ordering(308) 00:11:29.306 fused_ordering(309) 00:11:29.306 fused_ordering(310) 00:11:29.306 fused_ordering(311) 00:11:29.306 fused_ordering(312) 00:11:29.306 fused_ordering(313) 00:11:29.306 fused_ordering(314) 00:11:29.306 fused_ordering(315) 00:11:29.306 fused_ordering(316) 00:11:29.306 fused_ordering(317) 00:11:29.306 fused_ordering(318) 00:11:29.306 fused_ordering(319) 00:11:29.306 fused_ordering(320) 00:11:29.306 fused_ordering(321) 00:11:29.306 fused_ordering(322) 00:11:29.306 fused_ordering(323) 00:11:29.306 fused_ordering(324) 00:11:29.306 fused_ordering(325) 00:11:29.306 fused_ordering(326) 00:11:29.306 fused_ordering(327) 00:11:29.306 fused_ordering(328) 00:11:29.306 fused_ordering(329) 00:11:29.306 fused_ordering(330) 00:11:29.306 fused_ordering(331) 00:11:29.306 fused_ordering(332) 00:11:29.306 fused_ordering(333) 00:11:29.306 fused_ordering(334) 00:11:29.306 fused_ordering(335) 00:11:29.306 fused_ordering(336) 00:11:29.306 fused_ordering(337) 00:11:29.306 fused_ordering(338) 00:11:29.306 fused_ordering(339) 00:11:29.306 fused_ordering(340) 00:11:29.306 fused_ordering(341) 00:11:29.306 fused_ordering(342) 00:11:29.306 fused_ordering(343) 00:11:29.306 fused_ordering(344) 00:11:29.306 fused_ordering(345) 00:11:29.306 fused_ordering(346) 00:11:29.306 fused_ordering(347) 00:11:29.306 fused_ordering(348) 00:11:29.306 fused_ordering(349) 00:11:29.306 fused_ordering(350) 00:11:29.306 fused_ordering(351) 00:11:29.306 
fused_ordering(352) 00:11:29.306 fused_ordering(353) 00:11:29.306 fused_ordering(354) 00:11:29.306 fused_ordering(355) 00:11:29.306 fused_ordering(356) 00:11:29.306 fused_ordering(357) 00:11:29.306 fused_ordering(358) 00:11:29.306 fused_ordering(359) 00:11:29.306 fused_ordering(360) 00:11:29.306 fused_ordering(361) 00:11:29.306 fused_ordering(362) 00:11:29.306 fused_ordering(363) 00:11:29.306 fused_ordering(364) 00:11:29.306 fused_ordering(365) 00:11:29.306 fused_ordering(366) 00:11:29.306 fused_ordering(367) 00:11:29.306 fused_ordering(368) 00:11:29.306 fused_ordering(369) 00:11:29.306 fused_ordering(370) 00:11:29.306 fused_ordering(371) 00:11:29.306 fused_ordering(372) 00:11:29.306 fused_ordering(373) 00:11:29.306 fused_ordering(374) 00:11:29.306 fused_ordering(375) 00:11:29.306 fused_ordering(376) 00:11:29.306 fused_ordering(377) 00:11:29.306 fused_ordering(378) 00:11:29.306 fused_ordering(379) 00:11:29.306 fused_ordering(380) 00:11:29.306 fused_ordering(381) 00:11:29.306 fused_ordering(382) 00:11:29.306 fused_ordering(383) 00:11:29.306 fused_ordering(384) 00:11:29.306 fused_ordering(385) 00:11:29.306 fused_ordering(386) 00:11:29.306 fused_ordering(387) 00:11:29.306 fused_ordering(388) 00:11:29.306 fused_ordering(389) 00:11:29.306 fused_ordering(390) 00:11:29.306 fused_ordering(391) 00:11:29.306 fused_ordering(392) 00:11:29.306 fused_ordering(393) 00:11:29.306 fused_ordering(394) 00:11:29.306 fused_ordering(395) 00:11:29.306 fused_ordering(396) 00:11:29.306 fused_ordering(397) 00:11:29.306 fused_ordering(398) 00:11:29.306 fused_ordering(399) 00:11:29.306 fused_ordering(400) 00:11:29.306 fused_ordering(401) 00:11:29.306 fused_ordering(402) 00:11:29.306 fused_ordering(403) 00:11:29.306 fused_ordering(404) 00:11:29.306 fused_ordering(405) 00:11:29.306 fused_ordering(406) 00:11:29.306 fused_ordering(407) 00:11:29.306 fused_ordering(408) 00:11:29.306 fused_ordering(409) 00:11:29.306 fused_ordering(410) 00:11:29.877 fused_ordering(411) 00:11:29.877 fused_ordering(412) 
00:11:29.877 fused_ordering(413) 00:11:29.877 fused_ordering(414) 00:11:29.877 fused_ordering(415) 00:11:29.877 fused_ordering(416) 00:11:29.877 fused_ordering(417) 00:11:29.877 fused_ordering(418) 00:11:29.877 fused_ordering(419) 00:11:29.877 fused_ordering(420) 00:11:29.877 fused_ordering(421) 00:11:29.877 fused_ordering(422) 00:11:29.877 fused_ordering(423) 00:11:29.877 fused_ordering(424) 00:11:29.877 fused_ordering(425) 00:11:29.877 fused_ordering(426) 00:11:29.877 fused_ordering(427) 00:11:29.877 fused_ordering(428) 00:11:29.877 fused_ordering(429) 00:11:29.877 fused_ordering(430) 00:11:29.877 fused_ordering(431) 00:11:29.877 fused_ordering(432) 00:11:29.877 fused_ordering(433) 00:11:29.877 fused_ordering(434) 00:11:29.877 fused_ordering(435) 00:11:29.877 fused_ordering(436) 00:11:29.877 fused_ordering(437) 00:11:29.877 fused_ordering(438) 00:11:29.877 fused_ordering(439) 00:11:29.877 fused_ordering(440) 00:11:29.877 fused_ordering(441) 00:11:29.877 fused_ordering(442) 00:11:29.877 fused_ordering(443) 00:11:29.877 fused_ordering(444) 00:11:29.877 fused_ordering(445) 00:11:29.877 fused_ordering(446) 00:11:29.877 fused_ordering(447) 00:11:29.877 fused_ordering(448) 00:11:29.877 fused_ordering(449) 00:11:29.877 fused_ordering(450) 00:11:29.877 fused_ordering(451) 00:11:29.877 fused_ordering(452) 00:11:29.877 fused_ordering(453) 00:11:29.877 fused_ordering(454) 00:11:29.877 fused_ordering(455) 00:11:29.877 fused_ordering(456) 00:11:29.877 fused_ordering(457) 00:11:29.877 fused_ordering(458) 00:11:29.877 fused_ordering(459) 00:11:29.877 fused_ordering(460) 00:11:29.877 fused_ordering(461) 00:11:29.877 fused_ordering(462) 00:11:29.877 fused_ordering(463) 00:11:29.877 fused_ordering(464) 00:11:29.877 fused_ordering(465) 00:11:29.877 fused_ordering(466) 00:11:29.877 fused_ordering(467) 00:11:29.877 fused_ordering(468) 00:11:29.877 fused_ordering(469) 00:11:29.877 fused_ordering(470) 00:11:29.877 fused_ordering(471) 00:11:29.877 fused_ordering(472) 00:11:29.877 
fused_ordering(473) 00:11:29.877 fused_ordering(474) 00:11:29.877 fused_ordering(475) 00:11:29.877 fused_ordering(476) 00:11:29.877 fused_ordering(477) 00:11:29.877 fused_ordering(478) 00:11:29.877 fused_ordering(479) 00:11:29.877 fused_ordering(480) 00:11:29.877 fused_ordering(481) 00:11:29.877 fused_ordering(482) 00:11:29.877 fused_ordering(483) 00:11:29.877 fused_ordering(484) 00:11:29.877 fused_ordering(485) 00:11:29.877 fused_ordering(486) 00:11:29.877 fused_ordering(487) 00:11:29.877 fused_ordering(488) 00:11:29.877 fused_ordering(489) 00:11:29.877 fused_ordering(490) 00:11:29.877 fused_ordering(491) 00:11:29.877 fused_ordering(492) 00:11:29.878 fused_ordering(493) 00:11:29.878 fused_ordering(494) 00:11:29.878 fused_ordering(495) 00:11:29.878 fused_ordering(496) 00:11:29.878 fused_ordering(497) 00:11:29.878 fused_ordering(498) 00:11:29.878 fused_ordering(499) 00:11:29.878 fused_ordering(500) 00:11:29.878 fused_ordering(501) 00:11:29.878 fused_ordering(502) 00:11:29.878 fused_ordering(503) 00:11:29.878 fused_ordering(504) 00:11:29.878 fused_ordering(505) 00:11:29.878 fused_ordering(506) 00:11:29.878 fused_ordering(507) 00:11:29.878 fused_ordering(508) 00:11:29.878 fused_ordering(509) 00:11:29.878 fused_ordering(510) 00:11:29.878 fused_ordering(511) 00:11:29.878 fused_ordering(512) 00:11:29.878 fused_ordering(513) 00:11:29.878 fused_ordering(514) 00:11:29.878 fused_ordering(515) 00:11:29.878 fused_ordering(516) 00:11:29.878 fused_ordering(517) 00:11:29.878 fused_ordering(518) 00:11:29.878 fused_ordering(519) 00:11:29.878 fused_ordering(520) 00:11:29.878 fused_ordering(521) 00:11:29.878 fused_ordering(522) 00:11:29.878 fused_ordering(523) 00:11:29.878 fused_ordering(524) 00:11:29.878 fused_ordering(525) 00:11:29.878 fused_ordering(526) 00:11:29.878 fused_ordering(527) 00:11:29.878 fused_ordering(528) 00:11:29.878 fused_ordering(529) 00:11:29.878 fused_ordering(530) 00:11:29.878 fused_ordering(531) 00:11:29.878 fused_ordering(532) 00:11:29.878 fused_ordering(533) 
00:11:29.878 fused_ordering(534) 00:11:29.878 fused_ordering(535) 00:11:29.878 fused_ordering(536) 00:11:29.878 fused_ordering(537) 00:11:29.878 fused_ordering(538) 00:11:29.878 fused_ordering(539) 00:11:29.878 fused_ordering(540) 00:11:29.878 fused_ordering(541) 00:11:29.878 fused_ordering(542) 00:11:29.878 fused_ordering(543) 00:11:29.878 fused_ordering(544) 00:11:29.878 fused_ordering(545) 00:11:29.878 fused_ordering(546) 00:11:29.878 fused_ordering(547) 00:11:29.878 fused_ordering(548) 00:11:29.878 fused_ordering(549) 00:11:29.878 fused_ordering(550) 00:11:29.878 fused_ordering(551) 00:11:29.878 fused_ordering(552) 00:11:29.878 fused_ordering(553) 00:11:29.878 fused_ordering(554) 00:11:29.878 fused_ordering(555) 00:11:29.878 fused_ordering(556) 00:11:29.878 fused_ordering(557) 00:11:29.878 fused_ordering(558) 00:11:29.878 fused_ordering(559) 00:11:29.878 fused_ordering(560) 00:11:29.878 fused_ordering(561) 00:11:29.878 fused_ordering(562) 00:11:29.878 fused_ordering(563) 00:11:29.878 fused_ordering(564) 00:11:29.878 fused_ordering(565) 00:11:29.878 fused_ordering(566) 00:11:29.878 fused_ordering(567) 00:11:29.878 fused_ordering(568) 00:11:29.878 fused_ordering(569) 00:11:29.878 fused_ordering(570) 00:11:29.878 fused_ordering(571) 00:11:29.878 fused_ordering(572) 00:11:29.878 fused_ordering(573) 00:11:29.878 fused_ordering(574) 00:11:29.878 fused_ordering(575) 00:11:29.878 fused_ordering(576) 00:11:29.878 fused_ordering(577) 00:11:29.878 fused_ordering(578) 00:11:29.878 fused_ordering(579) 00:11:29.878 fused_ordering(580) 00:11:29.878 fused_ordering(581) 00:11:29.878 fused_ordering(582) 00:11:29.878 fused_ordering(583) 00:11:29.878 fused_ordering(584) 00:11:29.878 fused_ordering(585) 00:11:29.878 fused_ordering(586) 00:11:29.878 fused_ordering(587) 00:11:29.878 fused_ordering(588) 00:11:29.878 fused_ordering(589) 00:11:29.878 fused_ordering(590) 00:11:29.878 fused_ordering(591) 00:11:29.878 fused_ordering(592) 00:11:29.878 fused_ordering(593) 00:11:29.878 
fused_ordering(594) 00:11:29.878 ... fused_ordering(1016) 00:11:31.763 [per-iteration fused_ordering counter lines for iterations 594 through 1016, logged between 00:11:29.878 and 00:11:31.763, condensed]
fused_ordering(1017) 00:11:31.763 fused_ordering(1018) 00:11:31.763 fused_ordering(1019) 00:11:31.763 fused_ordering(1020) 00:11:31.763 fused_ordering(1021) 00:11:31.763 fused_ordering(1022) 00:11:31.763 fused_ordering(1023) 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:31.763 rmmod nvme_tcp 00:11:31.763 rmmod nvme_fabrics 00:11:31.763 rmmod nvme_keyring 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3819255 ']' 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3819255 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3819255 ']' 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@952 -- # kill -0 3819255 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3819255 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3819255' 00:11:31.763 killing process with pid 3819255 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3819255 00:11:31.763 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3819255 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:11:32.024 22:20:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:33.937 00:11:33.937 real 0m7.824s 00:11:33.937 user 0m6.213s 00:11:33.937 sys 0m3.077s 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:33.937 ************************************ 00:11:33.937 END TEST nvmf_fused_ordering 00:11:33.937 ************************************ 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.937 ************************************ 00:11:33.937 START TEST nvmf_ns_masking 00:11:33.937 ************************************ 00:11:33.937 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:33.937 * Looking for test storage... 
00:11:33.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.197 
22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.197 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=038e1b9e-6dc9-4a23-9962-cb62c75aa222 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=73e281b1-0955-45d6-ae3a-5c7b96126707 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=1312e911-54ba-426d-9f2e-b33ca9064c6b 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.198 22:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:34.198 22:20:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:35.578 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:35.578 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.578 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:35.838 Found net devices under 0000:08:00.0: cvl_0_0 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]]
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:11:35.838 Found net devices under 0000:08:00.1: cvl_0_1
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:11:35.838 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:35.839 22:21:01
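The `pci_net_devs` lines above glob each PCI function's sysfs `net/` directory for its interface name and then strip the directory prefix with the `##*/` parameter expansion. The two expansions can be reproduced against a throwaway directory tree (the temp layout below is invented for illustration; the real script reads `/sys/bus/pci/devices`):

```shell
# Stand up a fake sysfs layout under a temp dir so the sketch runs anywhere.
sysfs=$(mktemp -d)
pci=0000:08:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Glob yields full paths, one per net device under the PCI function.
pci_net_devs=("$sysfs/$pci/net/"*)

# ##*/ strips everything up to and including the last slash in each element.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

This is the mechanism behind the "Found net devices under 0000:08:00.0: cvl_0_0" lines in the trace.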
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:35.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:35.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms
00:11:35.839
00:11:35.839 --- 10.0.0.2 ping statistics ---
00:11:35.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:35.839 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:35.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:35.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms
00:11:35.839
00:11:35.839 --- 10.0.0.1 ping statistics ---
00:11:35.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:35.839 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
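The two `ping -c 1` runs confirm connectivity in both directions across the namespace boundary before the target is launched. If you need the latency out of such a statistics block in a script, the avg field can be pulled with awk (the `stats` line is copied from the log; the field positions assume iputils-style ping output):

```shell
# Last line of `ping` statistics, verbatim from the trace above.
stats='rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms'

# Splitting on both '=' and '/' puts min in $5, avg in $6, max in $7.
avg=$(awk -F'[=/]' '{print $6}' <<<"$stats")

echo "avg rtt: $avg ms"
```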
target/ns_masking.sh@51 -- # nvmfappstart
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3821141
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3821141
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3821141 ']'
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:35.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:35.839 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:35.839 [2024-07-24 22:21:01.474735] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
00:11:35.839 [2024-07-24 22:21:01.474846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:35.839 EAL: No free 2048 kB hugepages reported on node 1
00:11:35.839 [2024-07-24 22:21:01.541116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:36.098 [2024-07-24 22:21:01.656717] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:36.098 [2024-07-24 22:21:01.656770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:36.098 [2024-07-24 22:21:01.656786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:36.098 [2024-07-24 22:21:01.656799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:36.098 [2024-07-24 22:21:01.656811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:36.098 [2024-07-24 22:21:01.656846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:36.098 22:21:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:11:36.358 [2024-07-24 22:21:02.056960] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:36.617 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:11:36.617 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:11:36.617 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:11:36.878 Malloc1
00:11:36.878 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:11:37.137 Malloc2
00:11:37.137 22:21:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:11:37.396 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:11:37.654 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:37.911 [2024-07-24 22:21:03.578823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:37.911 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:11:37.911 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1312e911-54ba-426d-9f2e-b33ca9064c6b -a 10.0.0.2 -s 4420 -i 4
00:11:38.170 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:11:38.170 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0
00:11:38.170 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0
00:11:38.170 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]]
00:11:38.170 22:21:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 ))
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter ))
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:40.082 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:40.340 [ 0]:0x1
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe1e56bd52c54e938ab19467800cfe5c
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe1e56bd52c54e938ab19467800cfe5c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:40.340 22:21:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- #
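The `waitforserial` helper traced above is a bounded poll: every two seconds it counts block devices whose serial matches (`lsblk -l -o NAME,SERIAL | grep -c …`) until the expected count appears or 16 attempts elapse. A self-contained approximation, with `lsblk` stubbed so it runs without real NVMe devices (the stub output and the function body are a sketch, not SPDK's exact helper):

```shell
# Hypothetical lsblk stub so the sketch is runnable anywhere.
lsblk() { printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until `want` devices with the given serial show up, bounded at
# 16 attempts with a 2 s pause between tries, mirroring the trace above.
waitforserial() {
  local serial=$1 want=${2:-1} i=0 nvme_devices=0
  while (( i++ <= 15 )); do
    nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
    (( nvme_devices == want )) && return 0
    sleep 2
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME 1 && echo 'serial visible'
```

The stub satisfies the count on the first pass, so the loop returns immediately; against real hardware the retries cover the time the kernel needs to enumerate the new namespace.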
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:40.599 [ 0]:0x1
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe1e56bd52c54e938ab19467800cfe5c
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe1e56bd52c54e938ab19467800cfe5c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:40.599 [ 1]:0x2
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:11:40.599 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:40.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:40.857 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:41.115 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:11:41.379 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:11:41.379 22:21:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1312e911-54ba-426d-9f2e-b33ca9064c6b -a 10.0.0.2 -s 4420 -i 4
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]]
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1
00:11:41.637 22:21:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2
00:11:43.538 22:21:09
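The `ns_is_visible` checks above reduce to one comparison: the NGUID that `nvme id-ns` reports is tested against 32 zeros, because an all-zero NGUID means the controller has no active namespace at that NSID, i.e. the namespace is masked for this host. The comparison logic in isolation (the `nguid_visible` helper name is invented for this sketch):

```shell
# All-zero NGUID, as returned by `jq -r .nguid` for a masked namespace.
zero=00000000000000000000000000000000

# True when the NGUID is non-zero, i.e. the namespace is visible.
nguid_visible() {
  [[ $1 != "$zero" ]]
}

nguid_visible 2e43670f64af4cc292d790704adc0e93 && v1=yes || v1=no
nguid_visible "$zero" && v2=yes || v2=no
echo "real nguid visible: $v1; zero nguid visible: $v2"
```

This is why, after `--no-auto-visible` is set below, the test expects the 0x1 check to fail (`NOT ns_is_visible 0x1`) until `nvmf_ns_add_host` grants the host access.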
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 ))
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter ))
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:43.538 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:43.797 [ 0]:0x2
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:43.797 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:44.055 [ 0]:0x1
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe1e56bd52c54e938ab19467800cfe5c
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe1e56bd52c54e938ab19467800cfe5c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking --
target/ns_masking.sh@43 -- # grep 0x2
00:11:44.055 [ 1]:0x2
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:44.055 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.313 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93
00:11:44.313 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.313 22:21:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:44.571 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:11:44.571 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:44.572 [ 0]:0x2
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:11:44.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:11:44.572 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:44.830 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:11:44.830 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 1312e911-54ba-426d-9f2e-b33ca9064c6b -a 10.0.0.2 -s 4420 -i 4
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]]
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2
00:11:45.088 22:21:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2
00:11:47.616 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 ))
00:11:47.616 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c
SPDKISFASTANDAWESOME
00:11:47.616 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL
00:11:47.616 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter ))
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:47.617 [ 0]:0x1
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fe1e56bd52c54e938ab19467800cfe5c
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fe1e56bd52c54e938ab19467800cfe5c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:47.617 22:21:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:47.617 [ 1]:0x2
00:11:47.617 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:47.617 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:11:47.617 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93
00:11:47.617 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:47.617 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:11:47.875 [ 0]:0x2
00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- #
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # 
case "$(type -t "$arg")" in 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.875 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:11:48.133 [2024-07-24 22:21:13.722042] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:11:48.133 request: 00:11:48.133 { 00:11:48.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:48.133 "nsid": 2, 00:11:48.133 "host": "nqn.2016-06.io.spdk:host1", 00:11:48.133 "method": "nvmf_ns_remove_host", 00:11:48.133 "req_id": 1 00:11:48.133 } 00:11:48.133 Got JSON-RPC error response 00:11:48.133 response: 00:11:48.133 { 00:11:48.133 "code": -32602, 00:11:48.133 "message": "Invalid parameters" 00:11:48.133 } 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # 
valid_exec_arg ns_is_visible 0x1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:48.133 22:21:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:48.133 [ 0]:0x2 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2e43670f64af4cc292d790704adc0e93 00:11:48.133 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2e43670f64af4cc292d790704adc0e93 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:48.392 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3822977 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3822977 /var/tmp/host.sock 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3822977 ']' 00:11:48.392 
22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:11:48.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:48.392 22:21:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:48.392 [2024-07-24 22:21:13.954604] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:11:48.392 [2024-07-24 22:21:13.954704] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3822977 ] 00:11:48.392 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.392 [2024-07-24 22:21:14.015960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.650 [2024-07-24 22:21:14.132827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.909 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.909 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:11:48.909 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:49.167 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:49.425 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 038e1b9e-6dc9-4a23-9962-cb62c75aa222 00:11:49.425 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:49.425 22:21:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 038E1B9E6DC94A239962CB62C75AA222 -i 00:11:49.683 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 73e281b1-0955-45d6-ae3a-5c7b96126707 00:11:49.683 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:11:49.683 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 73E281B1095545D6AE3A5C7B96126707 -i 00:11:49.941 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:11:50.199 22:21:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:11:50.762 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:50.762 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:11:51.020 nvme0n1 00:11:51.020 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:51.020 22:21:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:11:51.585 nvme1n2 00:11:51.585 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:11:51.585 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:11:51.585 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:11:51.585 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:11:51.585 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:11:51.843 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:11:51.843 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:11:51.843 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:11:51.843 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:11:52.101 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 038e1b9e-6dc9-4a23-9962-cb62c75aa222 == \0\3\8\e\1\b\9\e\-\6\d\c\9\-\4\a\2\3\-\9\9\6\2\-\c\b\6\2\c\7\5\a\a\2\2\2 ]] 00:11:52.101 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:11:52.101 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:11:52.101 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 73e281b1-0955-45d6-ae3a-5c7b96126707 == \7\3\e\2\8\1\b\1\-\0\9\5\5\-\4\5\d\6\-\a\e\3\a\-\5\c\7\b\9\6\1\2\6\7\0\7 ]] 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3822977 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3822977 ']' 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3822977 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3822977 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:52.361 
22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3822977' 00:11:52.361 killing process with pid 3822977 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3822977 00:11:52.361 22:21:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3822977 00:11:52.635 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:53.202 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:11:53.202 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:11:53.202 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:53.202 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:11:53.202 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:53.203 rmmod nvme_tcp 00:11:53.203 rmmod nvme_fabrics 00:11:53.203 rmmod nvme_keyring 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 3821141 ']' 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3821141 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3821141 ']' 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3821141 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3821141 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3821141' 00:11:53.203 killing process with pid 3821141 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3821141 00:11:53.203 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3821141 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.462 22:21:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.365 22:21:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:55.365 00:11:55.365 real 0m21.406s 00:11:55.365 user 0m29.348s 00:11:55.365 sys 0m3.769s 00:11:55.365 22:21:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.365 22:21:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:55.365 ************************************ 00:11:55.365 END TEST nvmf_ns_masking 00:11:55.365 ************************************ 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.365 ************************************ 00:11:55.365 START TEST nvmf_nvme_cli 00:11:55.365 ************************************ 00:11:55.365 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:11:55.623 * Looking for test storage... 
00:11:55.623 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.623 22:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:11:55.623 22:21:21 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:57.527 
22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.527 22:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:11:57.527 Found 0000:08:00.0 (0x8086 - 0x159b) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:11:57.527 Found 0000:08:00.1 (0x8086 - 0x159b) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:11:57.527 Found net devices under 0000:08:00.0: cvl_0_0 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:11:57.527 Found net devices under 0000:08:00.1: cvl_0_1 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:57.527 22:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.527 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.528 22:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:57.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:11:57.528 00:11:57.528 --- 10.0.0.2 ping statistics --- 00:11:57.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.528 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:11:57.528 00:11:57.528 --- 10.0.0.1 ping statistics --- 00:11:57.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.528 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3824901 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3824901 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3824901 ']' 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:57.528 22:21:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.528 [2024-07-24 22:21:22.924251] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:11:57.528 [2024-07-24 22:21:22.924347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.528 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.528 [2024-07-24 22:21:22.992696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:57.528 [2024-07-24 22:21:23.114107] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.528 [2024-07-24 22:21:23.114171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.528 [2024-07-24 22:21:23.114186] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.528 [2024-07-24 22:21:23.114199] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.528 [2024-07-24 22:21:23.114211] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.528 [2024-07-24 22:21:23.116504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:57.528 [2024-07-24 22:21:23.116600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.528 [2024-07-24 22:21:23.116683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.528 [2024-07-24 22:21:23.116715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.786 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:57.786 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:11:57.786 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:57.786 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 [2024-07-24 22:21:23.272769] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 Malloc0 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 Malloc1 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 [2024-07-24 22:21:23.350801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -a 10.0.0.2 -s 4420 00:11:57.787 00:11:57.787 Discovery Log Number of Records 2, Generation counter 2 00:11:57.787 =====Discovery Log Entry 0====== 00:11:57.787 trtype: tcp 00:11:57.787 adrfam: ipv4 00:11:57.787 subtype: current discovery subsystem 00:11:57.787 treq: not required 00:11:57.787 portid: 0 00:11:57.787 trsvcid: 4420 00:11:57.787 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:57.787 traddr: 10.0.0.2 00:11:57.787 eflags: explicit discovery connections, duplicate discovery information 00:11:57.787 sectype: none 00:11:57.787 =====Discovery Log Entry 1====== 00:11:57.787 trtype: tcp 00:11:57.787 adrfam: ipv4 00:11:57.787 subtype: nvme subsystem 00:11:57.787 treq: not required 00:11:57.787 portid: 0 00:11:57.787 trsvcid: 4420 00:11:57.787 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:57.787 traddr: 10.0.0.2 00:11:57.787 eflags: none 00:11:57.787 sectype: none 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:11:57.787 22:21:23 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:11:58.352 22:21:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:00.878 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:00.879 /dev/nvme0n1 ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:00.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.879 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.879 rmmod nvme_tcp 00:12:00.879 rmmod nvme_fabrics 00:12:01.137 rmmod 
nvme_keyring 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3824901 ']' 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3824901 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3824901 ']' 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3824901 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3824901 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3824901' 00:12:01.137 killing process with pid 3824901 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3824901 00:12:01.137 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3824901 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.397 22:21:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.307 00:12:03.307 real 0m7.900s 00:12:03.307 user 0m15.323s 00:12:03.307 sys 0m1.916s 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:03.307 ************************************ 00:12:03.307 END TEST nvmf_nvme_cli 00:12:03.307 ************************************ 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.307 22:21:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.307 
************************************ 00:12:03.307 START TEST nvmf_vfio_user 00:12:03.307 ************************************ 00:12:03.307 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:03.567 * Looking for test storage... 00:12:03.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.567 22:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.567 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:03.568 22:21:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3825638 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3825638' 00:12:03.568 Process pid: 3825638 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3825638 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3825638 ']' 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:03.568 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:03.568 [2024-07-24 22:21:29.128100] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:12:03.568 [2024-07-24 22:21:29.128201] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.568 EAL: No free 2048 kB hugepages reported on node 1 00:12:03.568 [2024-07-24 22:21:29.189625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:03.826 [2024-07-24 22:21:29.306627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.826 [2024-07-24 22:21:29.306684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.826 [2024-07-24 22:21:29.306701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.826 [2024-07-24 22:21:29.306715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.826 [2024-07-24 22:21:29.306726] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:03.826 [2024-07-24 22:21:29.306790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.826 [2024-07-24 22:21:29.306883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:03.826 [2024-07-24 22:21:29.306932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:03.826 [2024-07-24 22:21:29.306936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.826 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:03.826 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:03.826 22:21:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:04.760 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:05.018 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:05.275 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:05.275 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:05.275 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:05.275 22:21:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:05.533 Malloc1 00:12:05.533 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:05.791 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:06.049 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:06.307 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:06.307 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:06.307 22:21:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:06.565 Malloc2 00:12:06.565 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:06.823 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:07.081 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:07.340 [2024-07-24 22:21:32.797224] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:12:07.340 [2024-07-24 22:21:32.797279] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3825959 ] 00:12:07.340 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.340 [2024-07-24 22:21:32.839441] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:07.340 [2024-07-24 22:21:32.841979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.340 [2024-07-24 22:21:32.842011] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fbcb5cc5000 00:12:07.341 [2024-07-24 22:21:32.842963] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.843958] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.341 [2024-07-24 
22:21:32.844976] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.845969] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.846977] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.847979] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.848990] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.849996] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:07.341 [2024-07-24 22:21:32.851013] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:07.341 [2024-07-24 22:21:32.851035] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fbcb5cba000 00:12:07.341 [2024-07-24 22:21:32.852490] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.341 [2024-07-24 22:21:32.873043] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:07.341 [2024-07-24 22:21:32.873083] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:07.341 [2024-07-24 22:21:32.876145] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:12:07.341 [2024-07-24 22:21:32.876209] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:07.341 [2024-07-24 22:21:32.876319] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:07.341 [2024-07-24 22:21:32.876352] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:07.341 [2024-07-24 22:21:32.876364] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:07.341 [2024-07-24 22:21:32.877134] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:07.341 [2024-07-24 22:21:32.877162] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:07.341 [2024-07-24 22:21:32.877177] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:07.341 [2024-07-24 22:21:32.878136] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:07.341 [2024-07-24 22:21:32.878156] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:07.341 [2024-07-24 22:21:32.878171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.880499] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:07.341 [2024-07-24 22:21:32.880519] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.881153] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:07.341 [2024-07-24 22:21:32.881172] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:07.341 [2024-07-24 22:21:32.881182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.881195] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.881306] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:07.341 [2024-07-24 22:21:32.881321] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.881332] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:07.341 [2024-07-24 22:21:32.882168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:07.341 [2024-07-24 22:21:32.883158] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:07.341 [2024-07-24 22:21:32.884179] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.341 
[2024-07-24 22:21:32.885168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:07.341 [2024-07-24 22:21:32.885288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:07.341 [2024-07-24 22:21:32.886185] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:07.341 [2024-07-24 22:21:32.886204] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:07.341 [2024-07-24 22:21:32.886214] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886242] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:07.341 [2024-07-24 22:21:32.886257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886288] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.341 [2024-07-24 22:21:32.886298] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.341 [2024-07-24 22:21:32.886306] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.341 [2024-07-24 22:21:32.886329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.341 [2024-07-24 22:21:32.886386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:07.341 [2024-07-24 22:21:32.886405] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:07.341 [2024-07-24 22:21:32.886415] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:07.341 [2024-07-24 22:21:32.886424] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:07.341 [2024-07-24 22:21:32.886432] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:07.341 [2024-07-24 22:21:32.886442] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:07.341 [2024-07-24 22:21:32.886451] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:07.341 [2024-07-24 22:21:32.886459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886473] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:07.341 [2024-07-24 22:21:32.886527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:07.341 [2024-07-24 22:21:32.886553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.341 [2024-07-24 22:21:32.886568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.341 [2024-07-24 22:21:32.886582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.341 [2024-07-24 22:21:32.886596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:07.341 [2024-07-24 22:21:32.886605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886622] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:07.341 [2024-07-24 22:21:32.886652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:07.341 [2024-07-24 22:21:32.886664] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:07.341 [2024-07-24 22:21:32.886674] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:07.341 [2024-07-24 22:21:32.886690] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.886702] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.886717] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.886730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.886806] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.886823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.886838] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:07.342 [2024-07-24 22:21:32.886847] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:07.342 [2024-07-24 22:21:32.886854] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.886865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.886881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.886901] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:07.342 [2024-07-24 22:21:32.886919] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.886935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:07.342 [2024-07-24 
22:21:32.886953] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.342 [2024-07-24 22:21:32.886963] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.342 [2024-07-24 22:21:32.886970] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.886980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887054] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887068] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:07.342 [2024-07-24 22:21:32.887077] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.342 [2024-07-24 22:21:32.887084] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.887095] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887127] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887181] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887191] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887201] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:07.342 [2024-07-24 22:21:32.887210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:07.342 [2024-07-24 22:21:32.887219] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:07.342 [2024-07-24 22:21:32.887248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:12:07.342 [2024-07-24 22:21:32.887290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887326] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887362] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887400] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:07.342 [2024-07-24 22:21:32.887411] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:07.342 [2024-07-24 22:21:32.887419] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:07.342 [2024-07-24 22:21:32.887425] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:07.342 [2024-07-24 22:21:32.887432] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:07.342 [2024-07-24 22:21:32.887443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:07.342 [2024-07-24 22:21:32.887456] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:12:07.342 [2024-07-24 22:21:32.887465] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:07.342 [2024-07-24 22:21:32.887472] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.887489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887504] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:07.342 [2024-07-24 22:21:32.887513] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:07.342 [2024-07-24 22:21:32.887520] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.887530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887544] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:07.342 [2024-07-24 22:21:32.887553] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:07.342 [2024-07-24 22:21:32.887560] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:07.342 [2024-07-24 22:21:32.887570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:07.342 [2024-07-24 22:21:32.887583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:07.342 [2024-07-24 22:21:32.887642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:07.342 ===================================================== 00:12:07.342 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:07.342 ===================================================== 00:12:07.342 Controller Capabilities/Features 00:12:07.342 ================================ 00:12:07.342 Vendor ID: 4e58 00:12:07.342 Subsystem Vendor ID: 4e58 00:12:07.342 Serial Number: SPDK1 00:12:07.342 Model Number: SPDK bdev Controller 00:12:07.342 Firmware Version: 24.09 00:12:07.342 Recommended Arb Burst: 6 00:12:07.342 IEEE OUI Identifier: 8d 6b 50 00:12:07.342 Multi-path I/O 00:12:07.342 May have multiple subsystem ports: Yes 00:12:07.342 May have multiple controllers: Yes 00:12:07.342 Associated with SR-IOV VF: No 00:12:07.342 Max Data Transfer Size: 131072 00:12:07.342 Max Number of Namespaces: 32 00:12:07.342 Max Number of I/O Queues: 127 00:12:07.342 NVMe Specification Version (VS): 1.3 00:12:07.342 NVMe Specification Version (Identify): 1.3 00:12:07.342 Maximum Queue Entries: 256 00:12:07.342 Contiguous Queues Required: Yes 00:12:07.342 Arbitration Mechanisms Supported 00:12:07.342 Weighted Round Robin: Not Supported 00:12:07.342 Vendor Specific: Not Supported 00:12:07.342 Reset Timeout: 15000 ms 00:12:07.342 Doorbell Stride: 4 bytes 00:12:07.342 NVM Subsystem Reset: Not Supported 00:12:07.342 Command Sets Supported 00:12:07.342 NVM Command Set: Supported 00:12:07.342 Boot Partition: Not Supported 00:12:07.342 Memory Page Size Minimum: 4096 bytes 00:12:07.342 Memory Page Size Maximum: 4096 bytes 00:12:07.342 Persistent Memory Region: Not 
Supported 00:12:07.342 Optional Asynchronous Events Supported 00:12:07.342 Namespace Attribute Notices: Supported 00:12:07.342 Firmware Activation Notices: Not Supported 00:12:07.342 ANA Change Notices: Not Supported 00:12:07.342 PLE Aggregate Log Change Notices: Not Supported 00:12:07.342 LBA Status Info Alert Notices: Not Supported 00:12:07.342 EGE Aggregate Log Change Notices: Not Supported 00:12:07.343 Normal NVM Subsystem Shutdown event: Not Supported 00:12:07.343 Zone Descriptor Change Notices: Not Supported 00:12:07.343 Discovery Log Change Notices: Not Supported 00:12:07.343 Controller Attributes 00:12:07.343 128-bit Host Identifier: Supported 00:12:07.343 Non-Operational Permissive Mode: Not Supported 00:12:07.343 NVM Sets: Not Supported 00:12:07.343 Read Recovery Levels: Not Supported 00:12:07.343 Endurance Groups: Not Supported 00:12:07.343 Predictable Latency Mode: Not Supported 00:12:07.343 Traffic Based Keep ALive: Not Supported 00:12:07.343 Namespace Granularity: Not Supported 00:12:07.343 SQ Associations: Not Supported 00:12:07.343 UUID List: Not Supported 00:12:07.343 Multi-Domain Subsystem: Not Supported 00:12:07.343 Fixed Capacity Management: Not Supported 00:12:07.343 Variable Capacity Management: Not Supported 00:12:07.343 Delete Endurance Group: Not Supported 00:12:07.343 Delete NVM Set: Not Supported 00:12:07.343 Extended LBA Formats Supported: Not Supported 00:12:07.343 Flexible Data Placement Supported: Not Supported 00:12:07.343 00:12:07.343 Controller Memory Buffer Support 00:12:07.343 ================================ 00:12:07.343 Supported: No 00:12:07.343 00:12:07.343 Persistent Memory Region Support 00:12:07.343 ================================ 00:12:07.343 Supported: No 00:12:07.343 00:12:07.343 Admin Command Set Attributes 00:12:07.343 ============================ 00:12:07.343 Security Send/Receive: Not Supported 00:12:07.343 Format NVM: Not Supported 00:12:07.343 Firmware Activate/Download: Not Supported 00:12:07.343 Namespace 
Management: Not Supported 00:12:07.343 Device Self-Test: Not Supported 00:12:07.343 Directives: Not Supported 00:12:07.343 NVMe-MI: Not Supported 00:12:07.343 Virtualization Management: Not Supported 00:12:07.343 Doorbell Buffer Config: Not Supported 00:12:07.343 Get LBA Status Capability: Not Supported 00:12:07.343 Command & Feature Lockdown Capability: Not Supported 00:12:07.343 Abort Command Limit: 4 00:12:07.343 Async Event Request Limit: 4 00:12:07.343 Number of Firmware Slots: N/A 00:12:07.343 Firmware Slot 1 Read-Only: N/A 00:12:07.343 Firmware Activation Without Reset: N/A 00:12:07.343 Multiple Update Detection Support: N/A 00:12:07.343 Firmware Update Granularity: No Information Provided 00:12:07.343 Per-Namespace SMART Log: No 00:12:07.343 Asymmetric Namespace Access Log Page: Not Supported 00:12:07.343 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:07.343 Command Effects Log Page: Supported 00:12:07.343 Get Log Page Extended Data: Supported 00:12:07.343 Telemetry Log Pages: Not Supported 00:12:07.343 Persistent Event Log Pages: Not Supported 00:12:07.343 Supported Log Pages Log Page: May Support 00:12:07.343 Commands Supported & Effects Log Page: Not Supported 00:12:07.343 Feature Identifiers & Effects Log Page:May Support 00:12:07.343 NVMe-MI Commands & Effects Log Page: May Support 00:12:07.343 Data Area 4 for Telemetry Log: Not Supported 00:12:07.343 Error Log Page Entries Supported: 128 00:12:07.343 Keep Alive: Supported 00:12:07.343 Keep Alive Granularity: 10000 ms 00:12:07.343 00:12:07.343 NVM Command Set Attributes 00:12:07.343 ========================== 00:12:07.343 Submission Queue Entry Size 00:12:07.343 Max: 64 00:12:07.343 Min: 64 00:12:07.343 Completion Queue Entry Size 00:12:07.343 Max: 16 00:12:07.343 Min: 16 00:12:07.343 Number of Namespaces: 32 00:12:07.343 Compare Command: Supported 00:12:07.343 Write Uncorrectable Command: Not Supported 00:12:07.343 Dataset Management Command: Supported 00:12:07.343 Write Zeroes Command: Supported 
00:12:07.343 Set Features Save Field: Not Supported 00:12:07.343 Reservations: Not Supported 00:12:07.343 Timestamp: Not Supported 00:12:07.343 Copy: Supported 00:12:07.343 Volatile Write Cache: Present 00:12:07.343 Atomic Write Unit (Normal): 1 00:12:07.343 Atomic Write Unit (PFail): 1 00:12:07.343 Atomic Compare & Write Unit: 1 00:12:07.343 Fused Compare & Write: Supported 00:12:07.343 Scatter-Gather List 00:12:07.343 SGL Command Set: Supported (Dword aligned) 00:12:07.343 SGL Keyed: Not Supported 00:12:07.343 SGL Bit Bucket Descriptor: Not Supported 00:12:07.343 SGL Metadata Pointer: Not Supported 00:12:07.343 Oversized SGL: Not Supported 00:12:07.343 SGL Metadata Address: Not Supported 00:12:07.343 SGL Offset: Not Supported 00:12:07.343 Transport SGL Data Block: Not Supported 00:12:07.343 Replay Protected Memory Block: Not Supported 00:12:07.343 00:12:07.343 Firmware Slot Information 00:12:07.343 ========================= 00:12:07.343 Active slot: 1 00:12:07.343 Slot 1 Firmware Revision: 24.09 00:12:07.343 00:12:07.343 00:12:07.343 Commands Supported and Effects 00:12:07.343 ============================== 00:12:07.343 Admin Commands 00:12:07.343 -------------- 00:12:07.343 Get Log Page (02h): Supported 00:12:07.343 Identify (06h): Supported 00:12:07.343 Abort (08h): Supported 00:12:07.343 Set Features (09h): Supported 00:12:07.343 Get Features (0Ah): Supported 00:12:07.343 Asynchronous Event Request (0Ch): Supported 00:12:07.343 Keep Alive (18h): Supported 00:12:07.343 I/O Commands 00:12:07.343 ------------ 00:12:07.343 Flush (00h): Supported LBA-Change 00:12:07.343 Write (01h): Supported LBA-Change 00:12:07.343 Read (02h): Supported 00:12:07.343 Compare (05h): Supported 00:12:07.343 Write Zeroes (08h): Supported LBA-Change 00:12:07.343 Dataset Management (09h): Supported LBA-Change 00:12:07.343 Copy (19h): Supported LBA-Change 00:12:07.343 00:12:07.343 Error Log 00:12:07.343 ========= 00:12:07.343 00:12:07.343 Arbitration 00:12:07.343 =========== 00:12:07.343 
Arbitration Burst: 1 00:12:07.343 00:12:07.343 Power Management 00:12:07.343 ================ 00:12:07.343 Number of Power States: 1 00:12:07.343 Current Power State: Power State #0 00:12:07.343 Power State #0: 00:12:07.343 Max Power: 0.00 W 00:12:07.343 Non-Operational State: Operational 00:12:07.343 Entry Latency: Not Reported 00:12:07.343 Exit Latency: Not Reported 00:12:07.343 Relative Read Throughput: 0 00:12:07.343 Relative Read Latency: 0 00:12:07.343 Relative Write Throughput: 0 00:12:07.343 Relative Write Latency: 0 00:12:07.343 Idle Power: Not Reported 00:12:07.343 Active Power: Not Reported 00:12:07.343 Non-Operational Permissive Mode: Not Supported 00:12:07.343 00:12:07.343 Health Information 00:12:07.343 ================== 00:12:07.343 Critical Warnings: 00:12:07.343 Available Spare Space: OK 00:12:07.343 Temperature: OK 00:12:07.343 Device Reliability: OK 00:12:07.343 Read Only: No 00:12:07.343 Volatile Memory Backup: OK 00:12:07.343 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:07.343 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:07.343 Available Spare: 0% 00:12:07.343 Available Sp[2024-07-24 22:21:32.887780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:07.343 [2024-07-24 22:21:32.887798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:07.343 [2024-07-24 22:21:32.887844] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:07.343 [2024-07-24 22:21:32.887867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.343 [2024-07-24 22:21:32.887880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.344 [2024-07-24 22:21:32.887892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.344 [2024-07-24 22:21:32.887903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.344 [2024-07-24 22:21:32.889494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:07.344 [2024-07-24 22:21:32.889518] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:07.344 [2024-07-24 22:21:32.890201] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:07.344 [2024-07-24 22:21:32.890290] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:07.344 [2024-07-24 22:21:32.890305] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:07.344 [2024-07-24 22:21:32.891209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:07.344 [2024-07-24 22:21:32.891233] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:07.344 [2024-07-24 22:21:32.891315] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:07.344 [2024-07-24 22:21:32.894497] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:07.344 are Threshold: 0% 00:12:07.344 Life Percentage Used: 0% 00:12:07.344 Data Units Read: 0 00:12:07.344 Data Units Written: 0 00:12:07.344 Host Read Commands: 0 00:12:07.344 Host Write Commands: 
0 00:12:07.344 Controller Busy Time: 0 minutes 00:12:07.344 Power Cycles: 0 00:12:07.344 Power On Hours: 0 hours 00:12:07.344 Unsafe Shutdowns: 0 00:12:07.344 Unrecoverable Media Errors: 0 00:12:07.344 Lifetime Error Log Entries: 0 00:12:07.344 Warning Temperature Time: 0 minutes 00:12:07.344 Critical Temperature Time: 0 minutes 00:12:07.344 00:12:07.344 Number of Queues 00:12:07.344 ================ 00:12:07.344 Number of I/O Submission Queues: 127 00:12:07.344 Number of I/O Completion Queues: 127 00:12:07.344 00:12:07.344 Active Namespaces 00:12:07.344 ================= 00:12:07.344 Namespace ID:1 00:12:07.344 Error Recovery Timeout: Unlimited 00:12:07.344 Command Set Identifier: NVM (00h) 00:12:07.344 Deallocate: Supported 00:12:07.344 Deallocated/Unwritten Error: Not Supported 00:12:07.344 Deallocated Read Value: Unknown 00:12:07.344 Deallocate in Write Zeroes: Not Supported 00:12:07.344 Deallocated Guard Field: 0xFFFF 00:12:07.344 Flush: Supported 00:12:07.344 Reservation: Supported 00:12:07.344 Namespace Sharing Capabilities: Multiple Controllers 00:12:07.344 Size (in LBAs): 131072 (0GiB) 00:12:07.344 Capacity (in LBAs): 131072 (0GiB) 00:12:07.344 Utilization (in LBAs): 131072 (0GiB) 00:12:07.344 NGUID: 54DC6740C39B4B6F8DE4D7F53A57FC49 00:12:07.344 UUID: 54dc6740-c39b-4b6f-8de4-d7f53a57fc49 00:12:07.344 Thin Provisioning: Not Supported 00:12:07.344 Per-NS Atomic Units: Yes 00:12:07.344 Atomic Boundary Size (Normal): 0 00:12:07.344 Atomic Boundary Size (PFail): 0 00:12:07.344 Atomic Boundary Offset: 0 00:12:07.344 Maximum Single Source Range Length: 65535 00:12:07.344 Maximum Copy Length: 65535 00:12:07.344 Maximum Source Range Count: 1 00:12:07.344 NGUID/EUI64 Never Reused: No 00:12:07.344 Namespace Write Protected: No 00:12:07.344 Number of LBA Formats: 1 00:12:07.344 Current LBA Format: LBA Format #00 00:12:07.344 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:07.344 00:12:07.344 22:21:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:07.344 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.603 [2024-07-24 22:21:33.121657] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:12.874 Initializing NVMe Controllers 00:12:12.874 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:12.874 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:12.874 Initialization complete. Launching workers. 00:12:12.874 ======================================================== 00:12:12.874 Latency(us) 00:12:12.874 Device Information : IOPS MiB/s Average min max 00:12:12.874 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 24083.82 94.08 5314.72 1488.38 10542.04 00:12:12.874 ======================================================== 00:12:12.874 Total : 24083.82 94.08 5314.72 1488.38 10542.04 00:12:12.874 00:12:12.874 [2024-07-24 22:21:38.143709] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:12.874 22:21:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:12.874 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.874 [2024-07-24 22:21:38.384922] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:18.136 Initializing NVMe Controllers 00:12:18.136 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:18.136 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:18.136 Initialization complete. Launching workers. 00:12:18.136 ======================================================== 00:12:18.136 Latency(us) 00:12:18.136 Device Information : IOPS MiB/s Average min max 00:12:18.136 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16043.81 62.67 7983.22 4970.75 15234.02 00:12:18.136 ======================================================== 00:12:18.136 Total : 16043.81 62.67 7983.22 4970.75 15234.02 00:12:18.136 00:12:18.136 [2024-07-24 22:21:43.426444] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:18.136 22:21:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:18.136 EAL: No free 2048 kB hugepages reported on node 1 00:12:18.136 [2024-07-24 22:21:43.660688] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:23.397 [2024-07-24 22:21:48.732755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:23.397 Initializing NVMe Controllers 00:12:23.397 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.397 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:23.397 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:23.397 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:23.397 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:23.397 Initialization complete. Launching workers. 00:12:23.397 Starting thread on core 2 00:12:23.397 Starting thread on core 3 00:12:23.397 Starting thread on core 1 00:12:23.397 22:21:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:23.397 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.397 [2024-07-24 22:21:49.031957] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.686 [2024-07-24 22:21:52.096649] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.686 Initializing NVMe Controllers 00:12:26.686 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.686 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.686 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:26.686 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:26.686 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:26.686 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:26.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:26.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:26.686 Initialization complete. Launching workers. 
00:12:26.686 Starting thread on core 1 with urgent priority queue 00:12:26.686 Starting thread on core 2 with urgent priority queue 00:12:26.686 Starting thread on core 3 with urgent priority queue 00:12:26.686 Starting thread on core 0 with urgent priority queue 00:12:26.686 SPDK bdev Controller (SPDK1 ) core 0: 7231.00 IO/s 13.83 secs/100000 ios 00:12:26.686 SPDK bdev Controller (SPDK1 ) core 1: 7725.00 IO/s 12.94 secs/100000 ios 00:12:26.686 SPDK bdev Controller (SPDK1 ) core 2: 7258.67 IO/s 13.78 secs/100000 ios 00:12:26.686 SPDK bdev Controller (SPDK1 ) core 3: 6616.67 IO/s 15.11 secs/100000 ios 00:12:26.686 ======================================================== 00:12:26.686 00:12:26.686 22:21:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.686 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.950 [2024-07-24 22:21:52.391069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:26.950 Initializing NVMe Controllers 00:12:26.950 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.950 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:26.950 Namespace ID: 1 size: 0GB 00:12:26.950 Initialization complete. 00:12:26.950 INFO: using host memory buffer for IO 00:12:26.950 Hello world! 
00:12:26.950 [2024-07-24 22:21:52.424893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:26.950 22:21:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:26.950 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.210 [2024-07-24 22:21:52.704025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.218 Initializing NVMe Controllers 00:12:28.218 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.218 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:28.218 Initialization complete. Launching workers. 00:12:28.218 submit (in ns) avg, min, max = 15171.8, 4505.2, 4018887.4 00:12:28.218 complete (in ns) avg, min, max = 25284.4, 2640.0, 4018330.4 00:12:28.218 00:12:28.218 Submit histogram 00:12:28.218 ================ 00:12:28.218 Range in us Cumulative Count 00:12:28.218 4.504 - 4.527: 0.0868% ( 10) 00:12:28.218 4.527 - 4.551: 1.1452% ( 122) 00:12:28.218 4.551 - 4.575: 4.6504% ( 404) 00:12:28.218 4.575 - 4.599: 9.7172% ( 584) 00:12:28.218 4.599 - 4.622: 15.8077% ( 702) 00:12:28.218 4.622 - 4.646: 19.5124% ( 427) 00:12:28.218 4.646 - 4.670: 21.5686% ( 237) 00:12:28.218 4.670 - 4.693: 22.1412% ( 66) 00:12:28.218 4.693 - 4.717: 22.8093% ( 77) 00:12:28.218 4.717 - 4.741: 24.6573% ( 213) 00:12:28.218 4.741 - 4.764: 28.8825% ( 487) 00:12:28.218 4.764 - 4.788: 35.3288% ( 743) 00:12:28.218 4.788 - 4.812: 42.1048% ( 781) 00:12:28.218 4.812 - 4.836: 46.7465% ( 535) 00:12:28.218 4.836 - 4.859: 48.0479% ( 150) 00:12:28.218 4.859 - 4.883: 48.4904% ( 51) 00:12:28.218 4.883 - 4.907: 49.0717% ( 67) 00:12:28.218 4.907 - 4.930: 50.0260% ( 110) 00:12:28.218 4.930 - 4.954: 51.4142% ( 160) 00:12:28.218 4.954 - 4.978: 
52.9499% ( 177) 00:12:28.219 4.978 - 5.001: 54.7198% ( 204) 00:12:28.219 5.001 - 5.025: 55.9604% ( 143) 00:12:28.219 5.025 - 5.049: 56.7239% ( 88) 00:12:28.219 5.049 - 5.073: 57.3920% ( 77) 00:12:28.219 5.073 - 5.096: 57.6783% ( 33) 00:12:28.219 5.096 - 5.120: 57.8431% ( 19) 00:12:28.219 5.120 - 5.144: 58.0253% ( 21) 00:12:28.219 5.144 - 5.167: 58.7541% ( 84) 00:12:28.219 5.167 - 5.191: 61.1314% ( 274) 00:12:28.219 5.191 - 5.215: 64.6104% ( 401) 00:12:28.219 5.215 - 5.239: 68.2197% ( 416) 00:12:28.219 5.239 - 5.262: 70.1805% ( 226) 00:12:28.219 5.262 - 5.286: 70.9092% ( 84) 00:12:28.219 5.286 - 5.310: 71.5947% ( 79) 00:12:28.219 5.310 - 5.333: 72.8006% ( 139) 00:12:28.219 5.333 - 5.357: 75.0390% ( 258) 00:12:28.219 5.357 - 5.381: 77.8154% ( 320) 00:12:28.219 5.381 - 5.404: 79.5159% ( 196) 00:12:28.219 5.404 - 5.428: 80.2620% ( 86) 00:12:28.219 5.428 - 5.452: 80.8867% ( 72) 00:12:28.219 5.452 - 5.476: 82.0927% ( 139) 00:12:28.219 5.476 - 5.499: 82.4657% ( 43) 00:12:28.219 5.499 - 5.523: 82.5612% ( 11) 00:12:28.219 5.523 - 5.547: 82.6826% ( 14) 00:12:28.219 5.547 - 5.570: 84.5653% ( 217) 00:12:28.219 5.570 - 5.594: 87.5499% ( 344) 00:12:28.219 5.594 - 5.618: 92.0094% ( 514) 00:12:28.219 5.618 - 5.641: 93.8921% ( 217) 00:12:28.219 5.641 - 5.665: 94.2912% ( 46) 00:12:28.219 5.665 - 5.689: 94.4560% ( 19) 00:12:28.219 5.689 - 5.713: 94.6382% ( 21) 00:12:28.219 5.713 - 5.736: 94.7076% ( 8) 00:12:28.219 5.736 - 5.760: 94.7250% ( 2) 00:12:28.219 5.760 - 5.784: 94.7857% ( 7) 00:12:28.219 5.784 - 5.807: 94.8204% ( 4) 00:12:28.219 5.807 - 5.831: 94.8378% ( 2) 00:12:28.219 5.831 - 5.855: 94.9419% ( 12) 00:12:28.219 5.855 - 5.879: 95.0373% ( 11) 00:12:28.219 5.879 - 5.902: 95.1848% ( 17) 00:12:28.219 5.902 - 5.926: 95.2889% ( 12) 00:12:28.219 5.926 - 5.950: 95.3930% ( 12) 00:12:28.219 5.950 - 5.973: 95.4971% ( 12) 00:12:28.219 5.973 - 5.997: 95.6012% ( 12) 00:12:28.219 5.997 - 6.021: 95.6446% ( 5) 00:12:28.219 6.021 - 6.044: 95.7054% ( 7) 00:12:28.219 6.044 - 6.068: 95.8355% ( 
15) 00:12:28.219 6.068 - 6.116: 96.0871% ( 29) 00:12:28.219 6.116 - 6.163: 96.4775% ( 45) 00:12:28.219 6.163 - 6.210: 96.5903% ( 13) 00:12:28.219 6.210 - 6.258: 96.8593% ( 31) 00:12:28.219 6.258 - 6.305: 97.3625% ( 58) 00:12:28.219 6.305 - 6.353: 97.5360% ( 20) 00:12:28.219 6.353 - 6.400: 97.6314% ( 11) 00:12:28.219 6.400 - 6.447: 97.6748% ( 5) 00:12:28.219 6.447 - 6.495: 97.7356% ( 7) 00:12:28.219 6.495 - 6.542: 97.8657% ( 15) 00:12:28.219 6.542 - 6.590: 97.9004% ( 4) 00:12:28.219 6.590 - 6.637: 97.9178% ( 2) 00:12:28.219 6.637 - 6.684: 97.9611% ( 5) 00:12:28.219 6.684 - 6.732: 98.0132% ( 6) 00:12:28.219 6.732 - 6.779: 98.0305% ( 2) 00:12:28.219 6.779 - 6.827: 98.0566% ( 3) 00:12:28.219 6.827 - 6.874: 98.0652% ( 1) 00:12:28.219 6.874 - 6.921: 98.3255% ( 30) 00:12:28.219 6.921 - 6.969: 98.6465% ( 37) 00:12:28.219 6.969 - 7.016: 98.8808% ( 27) 00:12:28.219 7.016 - 7.064: 98.9589% ( 9) 00:12:28.219 7.064 - 7.111: 98.9849% ( 3) 00:12:28.219 7.111 - 7.159: 98.9936% ( 1) 00:12:28.219 7.443 - 7.490: 99.0023% ( 1) 00:12:28.219 7.538 - 7.585: 99.0109% ( 1) 00:12:28.219 7.585 - 7.633: 99.0196% ( 1) 00:12:28.219 7.775 - 7.822: 99.0370% ( 2) 00:12:28.219 8.012 - 8.059: 99.0456% ( 1) 00:12:28.219 8.344 - 8.391: 99.0543% ( 1) 00:12:28.219 8.581 - 8.628: 99.0630% ( 1) 00:12:28.219 8.628 - 8.676: 99.0717% ( 1) 00:12:28.219 8.676 - 8.723: 99.0803% ( 1) 00:12:28.219 8.770 - 8.818: 99.0890% ( 1) 00:12:28.219 8.865 - 8.913: 99.1064% ( 2) 00:12:28.219 8.913 - 8.960: 99.1150% ( 1) 00:12:28.219 9.055 - 9.102: 99.1237% ( 1) 00:12:28.219 9.102 - 9.150: 99.1324% ( 1) 00:12:28.219 9.292 - 9.339: 99.1411% ( 1) 00:12:28.219 9.434 - 9.481: 99.1497% ( 1) 00:12:28.219 9.481 - 9.529: 99.1584% ( 1) 00:12:28.219 9.576 - 9.624: 99.1671% ( 1) 00:12:28.219 9.624 - 9.671: 99.1758% ( 1) 00:12:28.219 9.766 - 9.813: 99.1845% ( 1) 00:12:28.219 9.813 - 9.861: 99.1931% ( 1) 00:12:28.219 9.861 - 9.908: 99.2018% ( 1) 00:12:28.219 9.908 - 9.956: 99.2192% ( 2) 00:12:28.219 9.956 - 10.003: 99.2452% ( 3) 
00:12:28.219 10.003 - 10.050: 99.2539% ( 1) 00:12:28.219 10.098 - 10.145: 99.2625% ( 1) 00:12:28.219 10.145 - 10.193: 99.2799% ( 2) 00:12:28.219 10.193 - 10.240: 99.2972% ( 2) 00:12:28.219 10.287 - 10.335: 99.3233% ( 3) 00:12:28.219 10.430 - 10.477: 99.3319% ( 1) 00:12:28.219 10.619 - 10.667: 99.3493% ( 2) 00:12:28.219 10.714 - 10.761: 99.3840% ( 4) 00:12:28.219 10.761 - 10.809: 99.3927% ( 1) 00:12:28.219 10.809 - 10.856: 99.4187% ( 3) 00:12:28.219 10.856 - 10.904: 99.4274% ( 1) 00:12:28.219 10.904 - 10.951: 99.4447% ( 2) 00:12:28.219 10.951 - 10.999: 99.4534% ( 1) 00:12:28.219 10.999 - 11.046: 99.4621% ( 1) 00:12:28.219 11.141 - 11.188: 99.4708% ( 1) 00:12:28.219 11.283 - 11.330: 99.4794% ( 1) 00:12:28.219 11.330 - 11.378: 99.4881% ( 1) 00:12:28.219 11.615 - 11.662: 99.5141% ( 3) 00:12:28.219 11.804 - 11.852: 99.5228% ( 1) 00:12:28.219 11.852 - 11.899: 99.5315% ( 1) 00:12:28.219 12.136 - 12.231: 99.5402% ( 1) 00:12:28.219 12.231 - 12.326: 99.5488% ( 1) 00:12:28.219 12.326 - 12.421: 99.5575% ( 1) 00:12:28.219 12.421 - 12.516: 99.5662% ( 1) 00:12:28.219 12.516 - 12.610: 99.5749% ( 1) 00:12:28.219 12.610 - 12.705: 99.5836% ( 1) 00:12:28.219 13.179 - 13.274: 99.5922% ( 1) 00:12:28.219 13.274 - 13.369: 99.6009% ( 1) 00:12:28.219 13.369 - 13.464: 99.6096% ( 1) 00:12:28.219 13.653 - 13.748: 99.6356% ( 3) 00:12:28.219 13.748 - 13.843: 99.6703% ( 4) 00:12:28.219 13.843 - 13.938: 99.6790% ( 1) 00:12:28.219 14.033 - 14.127: 99.6877% ( 1) 00:12:28.219 14.222 - 14.317: 99.6963% ( 1) 00:12:28.219 14.412 - 14.507: 99.7137% ( 2) 00:12:28.219 15.834 - 15.929: 99.7224% ( 1) 00:12:28.219 16.498 - 16.593: 99.7310% ( 1) 00:12:28.219 18.489 - 18.584: 99.7397% ( 1) 00:12:28.219 20.575 - 20.670: 99.7484% ( 1) 00:12:28.219 3980.705 - 4004.978: 99.8785% ( 15) 00:12:28.219 4004.978 - 4029.250: 100.0000% ( 14) 00:12:28.219 00:12:28.219 Complete histogram 00:12:28.219 ================== 00:12:28.219 Range in us Cumulative Count 00:12:28.219 2.631 - 2.643: 0.0694% ( 8) 00:12:28.219 2.643 - 
2.655: 5.2837% ( 601) 00:12:28.219 2.655 - 2.667: 35.0165% ( 3427) 00:12:28.219 2.667 - 2.679: 47.4059% ( 1428) 00:12:28.219 2.679 - 2.690: 51.9868% ( 528) 00:12:28.219 2.690 - 2.702: 70.6229% ( 2148) 00:12:28.219 2.702 - 2.714: 84.7649% ( 1630) 00:12:28.219 2.714 - 2.726: 90.3870% ( 648) 00:12:28.219 2.726 - 2.738: 94.1003% ( 428) 00:12:28.219 2.738 - 2.750: 95.7227% ( 187) 00:12:28.219 2.750 - 2.761: 96.4949% ( 89) 00:12:28.219 2.761 - 2.773: 96.7812% ( 33) 00:12:28.219 2.773 - 2.785: 96.8766% ( 11) 00:12:28.219 2.785 - 2.797: 96.9027% ( 3) 00:12:28.219 2.797 - 2.809: 96.9460% ( 5) 00:12:28.219 2.809 - 2.821: 97.0154% ( 8) 00:12:28.219 2.821 - 2.833: 97.0501% ( 4) 00:12:28.220 2.833 - 2.844: 97.0588% ( 1) 00:12:28.220 2.856 - 2.868: 97.1282% ( 8) 00:12:28.220 2.868 - 2.880: 97.1716% ( 5) 00:12:28.220 2.880 - 2.892: 97.2323% ( 7) 00:12:28.220 2.892 - 2.904: 97.2844% ( 6) 00:12:28.220 2.904 - 2.916: 97.3538% ( 8) 00:12:28.220 2.916 - 2.927: 97.3972% ( 5) 00:12:28.220 2.927 - 2.939: 97.4059% ( 1) 00:12:28.220 2.939 - 2.951: 97.4406% ( 4) 00:12:28.220 2.951 - 2.963: 97.4753% ( 4) 00:12:28.220 2.963 - 2.975: 97.5100% ( 4) 00:12:28.220 2.975 - 2.987: 97.5360% ( 3) 00:12:28.220 2.987 - 2.999: 97.5620% ( 3) 00:12:28.220 3.022 - 3.034: 97.5881% ( 3) 00:12:28.220 3.034 - 3.058: 97.6054% ( 2) 00:12:28.220 3.058 - 3.081: 97.6314% ( 3) 00:12:28.220 3.081 - 3.105: 97.6922% ( 7) 00:12:28.220 3.105 - 3.129: 97.7356% ( 5) 00:12:28.220 3.129 - 3.153: 97.7789% ( 5) 00:12:28.220 3.153 - 3.176: 97.8657% ( 10) 00:12:28.220 3.176 - 3.200: 97.9004% ( 4) 00:12:28.220 3.200 - 3.224: 97.9785% ( 9) 00:12:28.220 3.224 - 3.247: 98.0479% ( 8) 00:12:28.220 3.247 - 3.271: 98.1086% ( 7) 00:12:28.220 3.271 - 3.295: 98.1607% ( 6) 00:12:28.220 3.295 - 3.319: 98.2214% ( 7) 00:12:28.220 3.319 - 3.342: 98.2908% ( 8) 00:12:28.220 3.342 - 3.366: 98.3516% ( 7) 00:12:28.220 3.366 - 3.390: 98.4123% ( 7) 00:12:28.220 3.390 - 3.413: 98.4817% ( 8) 00:12:28.220 3.413 - 3.437: 98.5164% ( 4) 00:12:28.220 3.437 - 
3.461: 98.6118% ( 11) 00:12:28.220 3.461 - 3.484: 98.6552% ( 5) 00:12:28.220 3.484 - 3.508: 98.7246% ( 8) 00:12:28.220 3.508 - 3.532: 98.7333% ( 1) 00:12:28.220 3.532 - 3.556: 98.7767% ( 5) 00:12:28.220 3.556 - 3.579: 98.8287% ( 6) 00:12:28.220 3.579 - 3.603: 98.8634% ( 4) 00:12:28.220 3.603 - 3.627: 98.8895% ( 3) 00:12:28.220 3.627 - 3.650: 98.8981% ( 1) 00:12:28.220 3.650 - 3.674: 98.9242% ( 3) 00:12:28.220 3.674 - 3.698: 98.9589% ( 4) 00:12:28.220 3.698 - 3.721: 98.9849% ( 3) 00:12:28.220 3.721 - 3.745: 99.0109% ( 3) 00:12:28.220 3.745 - 3.769: 99.0283% ( 2) 00:12:28.220 3.769 - 3.793: 99.0543% ( 3) 00:12:28.220 3.816 - 3.840: 99.0630% ( 1) 00:12:28.220 3.911 - 3.935: 99.0717% ( 1) 00:12:28.220 3.935 - 3.959: 99.0890% ( 2) 00:12:28.220 3.959 - 3.982: 99.1064% ( 2) 00:12:28.220 4.101 - 4.124: 99.1237% ( 2) 00:12:28.220 4.148 - 4.172: 99.1324% ( 1) 00:12:28.220 4.243 - 4.267: 99.1411% ( 1) 00:12:28.220 4.290 - 4.314: 99.1497% ( 1) 00:12:28.220 4.361 - 4.385: 99.1584% ( 1) 00:12:28.220 4.385 - 4.409: 99.1671% ( 1) 00:12:28.220 4.433 - 4.456: 99.1845% ( 2) 00:12:28.220 4.551 - 4.575: 99.1931% ( 1) 00:12:28.220 4.717 - 4.741: 99.2018% ( 1) 00:12:28.220 5.618 - 5.641: 99.2105% ( 1) 00:12:28.220 6.637 - 6.684: 99.2192% ( 1) 00:12:28.220 7.159 - 7.206: 99.2278% ( 1) 00:12:28.220 7.206 - 7.253: 99.2365% ( 1) 00:12:28.220 7.253 - 7.301: 99.2452% ( 1) 00:12:28.220 7.301 - 7.348: 99.2539% ( 1) 00:12:28.220 7.396 - 7.443: 99.2625% ( 1) 00:12:28.220 7.585 - 7.633: 99.2799% ( 2) 00:12:28.220 7.727 - 7.775: 99.2886% ( 1) 00:12:28.220 7.822 - 7.870: 99.2972% ( 1) 00:12:28.220 7.870 - 7.917: 99.3059% ( 1) 00:12:28.220 7.917 - 7.964: 99.3146% ( 1) 00:12:28.220 8.201 - 8.249: 99.3233% ( 1) 00:12:28.220 8.391 - 8.439: 99.3406% ( 2) 00:12:28.220 8.533 - 8.581: 99.3493% ( 1) 00:12:28.220 8.770 - 8.818: 99.3580% ( 1) 00:12:28.220 8.960 - 9.007: 99.3666% ( 1) 00:12:28.220 9.055 - 9.102: 99.3753% ( 1) 00:12:28.220 10.003 - 10.050: 99.3840% ( 1) 00:12:28.220 12.421 - 12.516: 99.3927% ( 1) 
00:12:28.220 14.791 - 14.886: 99.4014% ( 1) [2024-07-24 22:21:53.727260] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:28.220 16.213 - 16.308: 99.4100% ( 1) 00:12:28.220 17.351 - 17.446: 99.4187% ( 1) 00:12:28.220 25.410 - 25.600: 99.4274% ( 1) 00:12:28.220 29.013 - 29.203: 99.4361% ( 1) 00:12:28.220 3980.705 - 4004.978: 99.7484% ( 36) 00:12:28.220 4004.978 - 4029.250: 100.0000% ( 29) 00:12:28.220 00:12:28.220 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:28.220 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:28.220 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:28.220 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:28.220 22:21:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:28.478 [ 00:12:28.478 { 00:12:28.478 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:28.478 "subtype": "Discovery", 00:12:28.478 "listen_addresses": [], 00:12:28.478 "allow_any_host": true, 00:12:28.478 "hosts": [] 00:12:28.478 }, 00:12:28.478 { 00:12:28.478 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:28.478 "subtype": "NVMe", 00:12:28.478 "listen_addresses": [ 00:12:28.478 { 00:12:28.478 "trtype": "VFIOUSER", 00:12:28.478 "adrfam": "IPv4", 00:12:28.478 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:28.478 "trsvcid": "0" 00:12:28.478 } 00:12:28.478 ], 00:12:28.478 "allow_any_host": true, 00:12:28.478 "hosts": [], 00:12:28.478 "serial_number": "SPDK1", 00:12:28.478 "model_number": "SPDK bdev Controller", 
00:12:28.478 "max_namespaces": 32, 00:12:28.478 "min_cntlid": 1, 00:12:28.478 "max_cntlid": 65519, 00:12:28.478 "namespaces": [ 00:12:28.478 { 00:12:28.478 "nsid": 1, 00:12:28.479 "bdev_name": "Malloc1", 00:12:28.479 "name": "Malloc1", 00:12:28.479 "nguid": "54DC6740C39B4B6F8DE4D7F53A57FC49", 00:12:28.479 "uuid": "54dc6740-c39b-4b6f-8de4-d7f53a57fc49" 00:12:28.479 } 00:12:28.479 ] 00:12:28.479 }, 00:12:28.479 { 00:12:28.479 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:28.479 "subtype": "NVMe", 00:12:28.479 "listen_addresses": [ 00:12:28.479 { 00:12:28.479 "trtype": "VFIOUSER", 00:12:28.479 "adrfam": "IPv4", 00:12:28.479 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:28.479 "trsvcid": "0" 00:12:28.479 } 00:12:28.479 ], 00:12:28.479 "allow_any_host": true, 00:12:28.479 "hosts": [], 00:12:28.479 "serial_number": "SPDK2", 00:12:28.479 "model_number": "SPDK bdev Controller", 00:12:28.479 "max_namespaces": 32, 00:12:28.479 "min_cntlid": 1, 00:12:28.479 "max_cntlid": 65519, 00:12:28.479 "namespaces": [ 00:12:28.479 { 00:12:28.479 "nsid": 1, 00:12:28.479 "bdev_name": "Malloc2", 00:12:28.479 "name": "Malloc2", 00:12:28.479 "nguid": "75A478876C024254A4CA9DD474DFE757", 00:12:28.479 "uuid": "75a47887-6c02-4254-a4ca-9dd474dfe757" 00:12:28.479 } 00:12:28.479 ] 00:12:28.479 } 00:12:28.479 ] 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3827875 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 
00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:28.479 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:28.479 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.738 [2024-07-24 22:21:54.263000] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:28.738 Malloc3 00:12:28.738 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:29.306 [2024-07-24 22:21:54.710339] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:29.306 22:21:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:29.306 Asynchronous Event Request test 00:12:29.306 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.306 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:29.306 Registering asynchronous event callbacks... 00:12:29.306 Starting namespace attribute notice tests for all controllers... 
00:12:29.306 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:29.306 aer_cb - Changed Namespace 00:12:29.306 Cleaning up... 00:12:29.306 [ 00:12:29.306 { 00:12:29.306 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:29.306 "subtype": "Discovery", 00:12:29.306 "listen_addresses": [], 00:12:29.306 "allow_any_host": true, 00:12:29.306 "hosts": [] 00:12:29.306 }, 00:12:29.306 { 00:12:29.306 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:29.306 "subtype": "NVMe", 00:12:29.306 "listen_addresses": [ 00:12:29.306 { 00:12:29.306 "trtype": "VFIOUSER", 00:12:29.306 "adrfam": "IPv4", 00:12:29.306 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:29.306 "trsvcid": "0" 00:12:29.306 } 00:12:29.306 ], 00:12:29.306 "allow_any_host": true, 00:12:29.306 "hosts": [], 00:12:29.306 "serial_number": "SPDK1", 00:12:29.306 "model_number": "SPDK bdev Controller", 00:12:29.306 "max_namespaces": 32, 00:12:29.306 "min_cntlid": 1, 00:12:29.306 "max_cntlid": 65519, 00:12:29.306 "namespaces": [ 00:12:29.306 { 00:12:29.306 "nsid": 1, 00:12:29.306 "bdev_name": "Malloc1", 00:12:29.306 "name": "Malloc1", 00:12:29.306 "nguid": "54DC6740C39B4B6F8DE4D7F53A57FC49", 00:12:29.306 "uuid": "54dc6740-c39b-4b6f-8de4-d7f53a57fc49" 00:12:29.306 }, 00:12:29.306 { 00:12:29.306 "nsid": 2, 00:12:29.306 "bdev_name": "Malloc3", 00:12:29.306 "name": "Malloc3", 00:12:29.306 "nguid": "4BAA48A47A564CB4BA4FB681767651D8", 00:12:29.306 "uuid": "4baa48a4-7a56-4cb4-ba4f-b681767651d8" 00:12:29.306 } 00:12:29.306 ] 00:12:29.306 }, 00:12:29.306 { 00:12:29.306 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:29.306 "subtype": "NVMe", 00:12:29.306 "listen_addresses": [ 00:12:29.306 { 00:12:29.306 "trtype": "VFIOUSER", 00:12:29.306 "adrfam": "IPv4", 00:12:29.306 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:29.306 "trsvcid": "0" 00:12:29.306 } 00:12:29.306 ], 00:12:29.306 "allow_any_host": true, 00:12:29.306 "hosts": [], 00:12:29.306 "serial_number": 
"SPDK2", 00:12:29.306 "model_number": "SPDK bdev Controller", 00:12:29.306 "max_namespaces": 32, 00:12:29.306 "min_cntlid": 1, 00:12:29.306 "max_cntlid": 65519, 00:12:29.306 "namespaces": [ 00:12:29.306 { 00:12:29.306 "nsid": 1, 00:12:29.306 "bdev_name": "Malloc2", 00:12:29.307 "name": "Malloc2", 00:12:29.307 "nguid": "75A478876C024254A4CA9DD474DFE757", 00:12:29.307 "uuid": "75a47887-6c02-4254-a4ca-9dd474dfe757" 00:12:29.307 } 00:12:29.307 ] 00:12:29.307 } 00:12:29.307 ] 00:12:29.568 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3827875 00:12:29.568 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.568 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:29.568 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:29.568 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:29.568 [2024-07-24 22:21:55.044898] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:12:29.568 [2024-07-24 22:21:55.044949] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3827980 ] 00:12:29.568 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.568 [2024-07-24 22:21:55.086447] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:29.568 [2024-07-24 22:21:55.095835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.568 [2024-07-24 22:21:55.095867] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc585116000 00:12:29.568 [2024-07-24 22:21:55.096837] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.097847] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.098849] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.099854] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.100859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.101867] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.104498] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.104889] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:29.568 [2024-07-24 22:21:55.105914] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:29.568 [2024-07-24 22:21:55.105938] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc58510b000 00:12:29.568 [2024-07-24 22:21:55.107414] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.568 [2024-07-24 22:21:55.127422] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:29.569 [2024-07-24 22:21:55.127462] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:29.569 [2024-07-24 22:21:55.129575] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.569 [2024-07-24 22:21:55.129639] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:29.569 [2024-07-24 22:21:55.129752] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:29.569 [2024-07-24 22:21:55.129785] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:29.569 [2024-07-24 22:21:55.129797] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:29.569 [2024-07-24 22:21:55.131491] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:29.569 [2024-07-24 22:21:55.131526] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:29.569 [2024-07-24 22:21:55.131541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:29.569 [2024-07-24 22:21:55.131619] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:29.569 [2024-07-24 22:21:55.131641] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:29.569 [2024-07-24 22:21:55.131657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.132625] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:29.569 [2024-07-24 22:21:55.132653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.133635] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:29.569 [2024-07-24 22:21:55.133666] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:29.569 [2024-07-24 22:21:55.133681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.133696] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.133807] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:29.569 [2024-07-24 22:21:55.133816] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.133826] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:29.569 [2024-07-24 22:21:55.134637] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:29.569 [2024-07-24 22:21:55.135634] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:29.569 [2024-07-24 22:21:55.136646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.569 [2024-07-24 22:21:55.137654] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:29.569 [2024-07-24 22:21:55.137728] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:29.569 [2024-07-24 22:21:55.138668] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:29.569 [2024-07-24 22:21:55.138691] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:29.569 [2024-07-24 22:21:55.138702] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.138730] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:29.569 [2024-07-24 22:21:55.138746] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.138775] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.569 [2024-07-24 22:21:55.138786] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.569 [2024-07-24 22:21:55.138794] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.569 [2024-07-24 22:21:55.138816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.149504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:29.569 [2024-07-24 22:21:55.149530] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:29.569 [2024-07-24 22:21:55.149541] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:29.569 [2024-07-24 22:21:55.149550] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:29.569 [2024-07-24 22:21:55.149559] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:29.569 [2024-07-24 22:21:55.149568] 
nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:29.569 [2024-07-24 22:21:55.149577] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:29.569 [2024-07-24 22:21:55.149591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.149607] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.149629] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.157500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:29.569 [2024-07-24 22:21:55.157534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.569 [2024-07-24 22:21:55.157550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.569 [2024-07-24 22:21:55.157565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.569 [2024-07-24 22:21:55.157579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:29.569 [2024-07-24 22:21:55.157589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.157605] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.157623] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.165497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:29.569 [2024-07-24 22:21:55.165517] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:29.569 [2024-07-24 22:21:55.165528] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.165547] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.165560] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.165576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.173499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:29.569 [2024-07-24 22:21:55.173597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.173617] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.173633] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:29.569 [2024-07-24 22:21:55.173643] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:29.569 [2024-07-24 22:21:55.173650] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.569 [2024-07-24 22:21:55.173661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.181494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:29.569 [2024-07-24 22:21:55.181534] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:29.569 [2024-07-24 22:21:55.181554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.181573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:29.569 [2024-07-24 22:21:55.181587] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.569 [2024-07-24 22:21:55.181597] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.569 [2024-07-24 22:21:55.181604] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.569 [2024-07-24 22:21:55.181615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.569 [2024-07-24 22:21:55.189494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.189538] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.189558] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.189574] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:29.570 [2024-07-24 22:21:55.189583] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:29.570 [2024-07-24 22:21:55.189590] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.570 [2024-07-24 22:21:55.189602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.197504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.197529] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:29.570 
[2024-07-24 22:21:55.197587] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197607] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:29.570 [2024-07-24 22:21:55.197616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:29.570 [2024-07-24 22:21:55.197626] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:29.570 [2024-07-24 22:21:55.197657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.205493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.205536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.213495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.213530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.221494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.221522] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.229494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.229530] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:29.570 [2024-07-24 22:21:55.229543] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:29.570 [2024-07-24 22:21:55.229550] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:29.570 [2024-07-24 22:21:55.229558] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:29.570 [2024-07-24 22:21:55.229565] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:29.570 [2024-07-24 22:21:55.229576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:29.570 [2024-07-24 22:21:55.229589] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:29.570 [2024-07-24 22:21:55.229599] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:29.570 [2024-07-24 22:21:55.229605] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.570 [2024-07-24 22:21:55.229616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.229628] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:29.570 [2024-07-24 22:21:55.229638] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:12:29.570 [2024-07-24 22:21:55.229644] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.570 [2024-07-24 22:21:55.229654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.229668] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:29.570 [2024-07-24 22:21:55.229677] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:29.570 [2024-07-24 22:21:55.229684] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:29.570 [2024-07-24 22:21:55.229695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:29.570 [2024-07-24 22:21:55.237495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.237535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.237554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:29.570 [2024-07-24 22:21:55.237569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:29.570 ===================================================== 00:12:29.570 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:29.570 ===================================================== 00:12:29.570 Controller Capabilities/Features 00:12:29.570 ================================ 00:12:29.570 Vendor ID: 4e58 00:12:29.570 
Subsystem Vendor ID: 4e58 00:12:29.570 Serial Number: SPDK2 00:12:29.570 Model Number: SPDK bdev Controller 00:12:29.570 Firmware Version: 24.09 00:12:29.570 Recommended Arb Burst: 6 00:12:29.570 IEEE OUI Identifier: 8d 6b 50 00:12:29.570 Multi-path I/O 00:12:29.570 May have multiple subsystem ports: Yes 00:12:29.570 May have multiple controllers: Yes 00:12:29.570 Associated with SR-IOV VF: No 00:12:29.570 Max Data Transfer Size: 131072 00:12:29.570 Max Number of Namespaces: 32 00:12:29.570 Max Number of I/O Queues: 127 00:12:29.570 NVMe Specification Version (VS): 1.3 00:12:29.570 NVMe Specification Version (Identify): 1.3 00:12:29.570 Maximum Queue Entries: 256 00:12:29.570 Contiguous Queues Required: Yes 00:12:29.570 Arbitration Mechanisms Supported 00:12:29.570 Weighted Round Robin: Not Supported 00:12:29.570 Vendor Specific: Not Supported 00:12:29.570 Reset Timeout: 15000 ms 00:12:29.570 Doorbell Stride: 4 bytes 00:12:29.570 NVM Subsystem Reset: Not Supported 00:12:29.570 Command Sets Supported 00:12:29.570 NVM Command Set: Supported 00:12:29.570 Boot Partition: Not Supported 00:12:29.570 Memory Page Size Minimum: 4096 bytes 00:12:29.570 Memory Page Size Maximum: 4096 bytes 00:12:29.570 Persistent Memory Region: Not Supported 00:12:29.570 Optional Asynchronous Events Supported 00:12:29.570 Namespace Attribute Notices: Supported 00:12:29.570 Firmware Activation Notices: Not Supported 00:12:29.570 ANA Change Notices: Not Supported 00:12:29.570 PLE Aggregate Log Change Notices: Not Supported 00:12:29.570 LBA Status Info Alert Notices: Not Supported 00:12:29.570 EGE Aggregate Log Change Notices: Not Supported 00:12:29.570 Normal NVM Subsystem Shutdown event: Not Supported 00:12:29.570 Zone Descriptor Change Notices: Not Supported 00:12:29.570 Discovery Log Change Notices: Not Supported 00:12:29.570 Controller Attributes 00:12:29.570 128-bit Host Identifier: Supported 00:12:29.570 Non-Operational Permissive Mode: Not Supported 00:12:29.570 NVM Sets: Not Supported 
00:12:29.570 Read Recovery Levels: Not Supported 00:12:29.570 Endurance Groups: Not Supported 00:12:29.570 Predictable Latency Mode: Not Supported 00:12:29.570 Traffic Based Keep ALive: Not Supported 00:12:29.570 Namespace Granularity: Not Supported 00:12:29.570 SQ Associations: Not Supported 00:12:29.570 UUID List: Not Supported 00:12:29.570 Multi-Domain Subsystem: Not Supported 00:12:29.570 Fixed Capacity Management: Not Supported 00:12:29.570 Variable Capacity Management: Not Supported 00:12:29.570 Delete Endurance Group: Not Supported 00:12:29.570 Delete NVM Set: Not Supported 00:12:29.570 Extended LBA Formats Supported: Not Supported 00:12:29.570 Flexible Data Placement Supported: Not Supported 00:12:29.570 00:12:29.570 Controller Memory Buffer Support 00:12:29.570 ================================ 00:12:29.570 Supported: No 00:12:29.570 00:12:29.571 Persistent Memory Region Support 00:12:29.571 ================================ 00:12:29.571 Supported: No 00:12:29.571 00:12:29.571 Admin Command Set Attributes 00:12:29.571 ============================ 00:12:29.571 Security Send/Receive: Not Supported 00:12:29.571 Format NVM: Not Supported 00:12:29.571 Firmware Activate/Download: Not Supported 00:12:29.571 Namespace Management: Not Supported 00:12:29.571 Device Self-Test: Not Supported 00:12:29.571 Directives: Not Supported 00:12:29.571 NVMe-MI: Not Supported 00:12:29.571 Virtualization Management: Not Supported 00:12:29.571 Doorbell Buffer Config: Not Supported 00:12:29.571 Get LBA Status Capability: Not Supported 00:12:29.571 Command & Feature Lockdown Capability: Not Supported 00:12:29.571 Abort Command Limit: 4 00:12:29.571 Async Event Request Limit: 4 00:12:29.571 Number of Firmware Slots: N/A 00:12:29.571 Firmware Slot 1 Read-Only: N/A 00:12:29.571 Firmware Activation Without Reset: N/A 00:12:29.571 Multiple Update Detection Support: N/A 00:12:29.571 Firmware Update Granularity: No Information Provided 00:12:29.571 Per-Namespace SMART Log: No 00:12:29.571 
Asymmetric Namespace Access Log Page: Not Supported 00:12:29.571 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:29.571 Command Effects Log Page: Supported 00:12:29.571 Get Log Page Extended Data: Supported 00:12:29.571 Telemetry Log Pages: Not Supported 00:12:29.571 Persistent Event Log Pages: Not Supported 00:12:29.571 Supported Log Pages Log Page: May Support 00:12:29.571 Commands Supported & Effects Log Page: Not Supported 00:12:29.571 Feature Identifiers & Effects Log Page:May Support 00:12:29.571 NVMe-MI Commands & Effects Log Page: May Support 00:12:29.571 Data Area 4 for Telemetry Log: Not Supported 00:12:29.571 Error Log Page Entries Supported: 128 00:12:29.571 Keep Alive: Supported 00:12:29.571 Keep Alive Granularity: 10000 ms 00:12:29.571 00:12:29.571 NVM Command Set Attributes 00:12:29.571 ========================== 00:12:29.571 Submission Queue Entry Size 00:12:29.571 Max: 64 00:12:29.571 Min: 64 00:12:29.571 Completion Queue Entry Size 00:12:29.571 Max: 16 00:12:29.571 Min: 16 00:12:29.571 Number of Namespaces: 32 00:12:29.571 Compare Command: Supported 00:12:29.571 Write Uncorrectable Command: Not Supported 00:12:29.571 Dataset Management Command: Supported 00:12:29.571 Write Zeroes Command: Supported 00:12:29.571 Set Features Save Field: Not Supported 00:12:29.571 Reservations: Not Supported 00:12:29.571 Timestamp: Not Supported 00:12:29.571 Copy: Supported 00:12:29.571 Volatile Write Cache: Present 00:12:29.571 Atomic Write Unit (Normal): 1 00:12:29.571 Atomic Write Unit (PFail): 1 00:12:29.571 Atomic Compare & Write Unit: 1 00:12:29.571 Fused Compare & Write: Supported 00:12:29.571 Scatter-Gather List 00:12:29.571 SGL Command Set: Supported (Dword aligned) 00:12:29.571 SGL Keyed: Not Supported 00:12:29.571 SGL Bit Bucket Descriptor: Not Supported 00:12:29.571 SGL Metadata Pointer: Not Supported 00:12:29.571 Oversized SGL: Not Supported 00:12:29.571 SGL Metadata Address: Not Supported 00:12:29.571 SGL Offset: Not Supported 00:12:29.571 Transport 
SGL Data Block: Not Supported 00:12:29.571 Replay Protected Memory Block: Not Supported 00:12:29.571 00:12:29.571 Firmware Slot Information 00:12:29.571 ========================= 00:12:29.571 Active slot: 1 00:12:29.571 Slot 1 Firmware Revision: 24.09 00:12:29.571 00:12:29.571 00:12:29.571 Commands Supported and Effects 00:12:29.571 ============================== 00:12:29.571 Admin Commands 00:12:29.571 -------------- 00:12:29.571 Get Log Page (02h): Supported 00:12:29.571 Identify (06h): Supported 00:12:29.571 Abort (08h): Supported 00:12:29.571 Set Features (09h): Supported 00:12:29.571 Get Features (0Ah): Supported 00:12:29.571 Asynchronous Event Request (0Ch): Supported 00:12:29.571 Keep Alive (18h): Supported 00:12:29.571 I/O Commands 00:12:29.571 ------------ 00:12:29.571 Flush (00h): Supported LBA-Change 00:12:29.571 Write (01h): Supported LBA-Change 00:12:29.571 Read (02h): Supported 00:12:29.571 Compare (05h): Supported 00:12:29.571 Write Zeroes (08h): Supported LBA-Change 00:12:29.571 Dataset Management (09h): Supported LBA-Change 00:12:29.571 Copy (19h): Supported LBA-Change 00:12:29.571 00:12:29.571 Error Log 00:12:29.571 ========= 00:12:29.571 00:12:29.571 Arbitration 00:12:29.571 =========== 00:12:29.571 Arbitration Burst: 1 00:12:29.571 00:12:29.571 Power Management 00:12:29.571 ================ 00:12:29.571 Number of Power States: 1 00:12:29.571 Current Power State: Power State #0 00:12:29.571 Power State #0: 00:12:29.571 Max Power: 0.00 W 00:12:29.571 Non-Operational State: Operational 00:12:29.571 Entry Latency: Not Reported 00:12:29.571 Exit Latency: Not Reported 00:12:29.571 Relative Read Throughput: 0 00:12:29.571 Relative Read Latency: 0 00:12:29.571 Relative Write Throughput: 0 00:12:29.571 Relative Write Latency: 0 00:12:29.571 Idle Power: Not Reported 00:12:29.571 Active Power: Not Reported 00:12:29.571 Non-Operational Permissive Mode: Not Supported 00:12:29.571 00:12:29.571 Health Information 00:12:29.571 ================== 00:12:29.571 
Critical Warnings: 00:12:29.571 Available Spare Space: OK 00:12:29.571 Temperature: OK 00:12:29.571 Device Reliability: OK 00:12:29.571 Read Only: No 00:12:29.571 Volatile Memory Backup: OK 00:12:29.571 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:29.571 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:29.571 Available Spare: 0% 00:12:29.571 Available Sp[2024-07-24 22:21:55.237720] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:29.571 [2024-07-24 22:21:55.245493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:29.571 [2024-07-24 22:21:55.245549] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:29.571 [2024-07-24 22:21:55.245570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.571 [2024-07-24 22:21:55.245583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.571 [2024-07-24 22:21:55.245595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.571 [2024-07-24 22:21:55.245606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:29.571 [2024-07-24 22:21:55.245680] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:29.571 [2024-07-24 22:21:55.245705] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:29.571 [2024-07-24 22:21:55.246685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:29.571 [2024-07-24 22:21:55.246766] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:29.571 [2024-07-24 22:21:55.246783] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:29.571 [2024-07-24 22:21:55.247696] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:29.571 [2024-07-24 22:21:55.247723] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:29.571 [2024-07-24 22:21:55.247802] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:29.572 [2024-07-24 22:21:55.249320] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:29.831 are Threshold: 0% 00:12:29.831 Life Percentage Used: 0% 00:12:29.831 Data Units Read: 0 00:12:29.831 Data Units Written: 0 00:12:29.831 Host Read Commands: 0 00:12:29.831 Host Write Commands: 0 00:12:29.831 Controller Busy Time: 0 minutes 00:12:29.831 Power Cycles: 0 00:12:29.831 Power On Hours: 0 hours 00:12:29.831 Unsafe Shutdowns: 0 00:12:29.831 Unrecoverable Media Errors: 0 00:12:29.831 Lifetime Error Log Entries: 0 00:12:29.831 Warning Temperature Time: 0 minutes 00:12:29.831 Critical Temperature Time: 0 minutes 00:12:29.831 00:12:29.831 Number of Queues 00:12:29.831 ================ 00:12:29.831 Number of I/O Submission Queues: 127 00:12:29.831 Number of I/O Completion Queues: 127 00:12:29.831 00:12:29.831 Active Namespaces 00:12:29.831 ================= 00:12:29.831 Namespace ID:1 00:12:29.831 Error Recovery Timeout: Unlimited 00:12:29.831 Command Set Identifier: NVM (00h) 00:12:29.831 Deallocate: 
Supported 00:12:29.831 Deallocated/Unwritten Error: Not Supported 00:12:29.831 Deallocated Read Value: Unknown 00:12:29.831 Deallocate in Write Zeroes: Not Supported 00:12:29.831 Deallocated Guard Field: 0xFFFF 00:12:29.831 Flush: Supported 00:12:29.831 Reservation: Supported 00:12:29.831 Namespace Sharing Capabilities: Multiple Controllers 00:12:29.831 Size (in LBAs): 131072 (0GiB) 00:12:29.831 Capacity (in LBAs): 131072 (0GiB) 00:12:29.831 Utilization (in LBAs): 131072 (0GiB) 00:12:29.831 NGUID: 75A478876C024254A4CA9DD474DFE757 00:12:29.831 UUID: 75a47887-6c02-4254-a4ca-9dd474dfe757 00:12:29.831 Thin Provisioning: Not Supported 00:12:29.831 Per-NS Atomic Units: Yes 00:12:29.831 Atomic Boundary Size (Normal): 0 00:12:29.831 Atomic Boundary Size (PFail): 0 00:12:29.831 Atomic Boundary Offset: 0 00:12:29.831 Maximum Single Source Range Length: 65535 00:12:29.831 Maximum Copy Length: 65535 00:12:29.831 Maximum Source Range Count: 1 00:12:29.831 NGUID/EUI64 Never Reused: No 00:12:29.831 Namespace Write Protected: No 00:12:29.831 Number of LBA Formats: 1 00:12:29.831 Current LBA Format: LBA Format #00 00:12:29.831 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:29.831 00:12:29.831 22:21:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.831 [2024-07-24 22:21:55.476548] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:35.107 Initializing NVMe Controllers 00:12:35.107 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:35.107 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:35.107 
Initialization complete. Launching workers. 00:12:35.107 ======================================================== 00:12:35.107 Latency(us) 00:12:35.107 Device Information : IOPS MiB/s Average min max 00:12:35.107 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24124.40 94.24 5306.57 1485.04 10552.58 00:12:35.107 ======================================================== 00:12:35.107 Total : 24124.40 94.24 5306.57 1485.04 10552.58 00:12:35.107 00:12:35.107 [2024-07-24 22:22:00.579822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:35.107 22:22:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:35.107 EAL: No free 2048 kB hugepages reported on node 1 00:12:35.365 [2024-07-24 22:22:00.816426] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:40.638 Initializing NVMe Controllers 00:12:40.638 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:40.638 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:40.638 Initialization complete. Launching workers. 
00:12:40.638 ======================================================== 00:12:40.638 Latency(us) 00:12:40.638 Device Information : IOPS MiB/s Average min max 00:12:40.638 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 24077.81 94.05 5316.06 1494.49 10553.83 00:12:40.638 ======================================================== 00:12:40.638 Total : 24077.81 94.05 5316.06 1494.49 10553.83 00:12:40.638 00:12:40.638 [2024-07-24 22:22:05.836600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:40.638 22:22:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:40.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.638 [2024-07-24 22:22:06.073516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:45.916 [2024-07-24 22:22:11.215647] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:45.916 Initializing NVMe Controllers 00:12:45.916 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:45.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:12:45.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:12:45.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:12:45.916 Initialization complete. Launching workers. 
00:12:45.916 Starting thread on core 2 00:12:45.916 Starting thread on core 3 00:12:45.916 Starting thread on core 1 00:12:45.916 22:22:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:12:45.916 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.916 [2024-07-24 22:22:11.512986] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.108 [2024-07-24 22:22:14.942694] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.108 Initializing NVMe Controllers 00:12:50.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:12:50.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:12:50.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:12:50.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:12:50.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:50.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:50.108 Initialization complete. Launching workers. 
00:12:50.108 Starting thread on core 1 with urgent priority queue 00:12:50.108 Starting thread on core 2 with urgent priority queue 00:12:50.108 Starting thread on core 3 with urgent priority queue 00:12:50.108 Starting thread on core 0 with urgent priority queue 00:12:50.108 SPDK bdev Controller (SPDK2 ) core 0: 4563.00 IO/s 21.92 secs/100000 ios 00:12:50.109 SPDK bdev Controller (SPDK2 ) core 1: 4980.33 IO/s 20.08 secs/100000 ios 00:12:50.109 SPDK bdev Controller (SPDK2 ) core 2: 5641.33 IO/s 17.73 secs/100000 ios 00:12:50.109 SPDK bdev Controller (SPDK2 ) core 3: 6079.67 IO/s 16.45 secs/100000 ios 00:12:50.109 ======================================================== 00:12:50.109 00:12:50.109 22:22:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:50.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.109 [2024-07-24 22:22:15.227009] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:50.109 Initializing NVMe Controllers 00:12:50.109 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.109 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:50.109 Namespace ID: 1 size: 0GB 00:12:50.109 Initialization complete. 00:12:50.109 INFO: using host memory buffer for IO 00:12:50.109 Hello world! 
00:12:50.109 [2024-07-24 22:22:15.237077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:50.109 22:22:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:12:50.109 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.109 [2024-07-24 22:22:15.516437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.048 Initializing NVMe Controllers 00:12:51.048 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.048 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:51.048 Initialization complete. Launching workers. 00:12:51.048 submit (in ns) avg, min, max = 10100.2, 4546.7, 4017909.6 00:12:51.048 complete (in ns) avg, min, max = 32884.9, 2650.4, 4024050.4 00:12:51.048 00:12:51.048 Submit histogram 00:12:51.048 ================ 00:12:51.048 Range in us Cumulative Count 00:12:51.048 4.527 - 4.551: 0.0343% ( 4) 00:12:51.048 4.551 - 4.575: 0.4377% ( 47) 00:12:51.048 4.575 - 4.599: 2.1368% ( 198) 00:12:51.048 4.599 - 4.622: 5.3291% ( 372) 00:12:51.048 4.622 - 4.646: 10.3664% ( 587) 00:12:51.048 4.646 - 4.670: 13.9878% ( 422) 00:12:51.048 4.670 - 4.693: 16.2791% ( 267) 00:12:51.048 4.693 - 4.717: 17.1887% ( 106) 00:12:51.048 4.717 - 4.741: 17.7637% ( 67) 00:12:51.048 4.741 - 4.764: 18.8793% ( 130) 00:12:51.048 4.764 - 4.788: 21.1019% ( 259) 00:12:51.048 4.788 - 4.812: 25.6672% ( 532) 00:12:51.048 4.812 - 4.836: 30.2669% ( 536) 00:12:51.048 4.836 - 4.859: 33.8969% ( 423) 00:12:51.048 4.859 - 4.883: 35.5531% ( 193) 00:12:51.048 4.883 - 4.907: 36.0765% ( 61) 00:12:51.048 4.907 - 4.930: 36.4026% ( 38) 00:12:51.048 4.930 - 4.954: 36.8060% ( 47) 00:12:51.048 4.954 - 4.978: 37.2951% ( 57) 00:12:51.048 4.978 - 5.001: 
37.7928% ( 58) 00:12:51.048 5.001 - 5.025: 38.1533% ( 42) 00:12:51.048 5.025 - 5.049: 38.4708% ( 37) 00:12:51.048 5.049 - 5.073: 38.7111% ( 28) 00:12:51.048 5.073 - 5.096: 38.8913% ( 21) 00:12:51.048 5.096 - 5.120: 39.0114% ( 14) 00:12:51.048 5.120 - 5.144: 39.0801% ( 8) 00:12:51.048 5.144 - 5.167: 39.1830% ( 12) 00:12:51.048 5.167 - 5.191: 39.6035% ( 49) 00:12:51.048 5.191 - 5.215: 41.0710% ( 171) 00:12:51.048 5.215 - 5.239: 44.8125% ( 436) 00:12:51.048 5.239 - 5.262: 48.7257% ( 456) 00:12:51.048 5.262 - 5.286: 51.7549% ( 353) 00:12:51.048 5.286 - 5.310: 53.1108% ( 158) 00:12:51.048 5.310 - 5.333: 53.9861% ( 102) 00:12:51.048 5.333 - 5.357: 55.4535% ( 171) 00:12:51.048 5.357 - 5.381: 59.0063% ( 414) 00:12:51.048 5.381 - 5.404: 62.5418% ( 412) 00:12:51.048 5.404 - 5.428: 65.9229% ( 394) 00:12:51.048 5.428 - 5.452: 67.7594% ( 214) 00:12:51.048 5.452 - 5.476: 69.0895% ( 155) 00:12:51.049 5.476 - 5.499: 70.5827% ( 174) 00:12:51.049 5.499 - 5.523: 71.5009% ( 107) 00:12:51.049 5.523 - 5.547: 71.8098% ( 36) 00:12:51.049 5.547 - 5.570: 71.9386% ( 15) 00:12:51.049 5.570 - 5.594: 73.3288% ( 162) 00:12:51.049 5.594 - 5.618: 78.9153% ( 651) 00:12:51.049 5.618 - 5.641: 84.8194% ( 688) 00:12:51.049 5.641 - 5.665: 90.0112% ( 605) 00:12:51.049 5.665 - 5.689: 91.6931% ( 196) 00:12:51.049 5.689 - 5.713: 92.2938% ( 70) 00:12:51.049 5.713 - 5.736: 92.5169% ( 26) 00:12:51.049 5.736 - 5.760: 92.8173% ( 35) 00:12:51.049 5.760 - 5.784: 92.9975% ( 21) 00:12:51.049 5.784 - 5.807: 93.0833% ( 10) 00:12:51.049 5.807 - 5.831: 93.1606% ( 9) 00:12:51.049 5.831 - 5.855: 93.3150% ( 18) 00:12:51.049 5.855 - 5.879: 93.5896% ( 32) 00:12:51.049 5.879 - 5.902: 93.7698% ( 21) 00:12:51.049 5.902 - 5.926: 93.9415% ( 20) 00:12:51.049 5.926 - 5.950: 94.0445% ( 12) 00:12:51.049 5.950 - 5.973: 94.1388% ( 11) 00:12:51.049 5.973 - 5.997: 94.2247% ( 10) 00:12:51.049 5.997 - 6.021: 94.3362% ( 13) 00:12:51.049 6.021 - 6.044: 94.3791% ( 5) 00:12:51.049 6.044 - 6.068: 94.4478% ( 8) 00:12:51.049 6.068 - 6.116: 
94.6452% ( 23) 00:12:51.049 6.116 - 6.163: 94.7481% ( 12) 00:12:51.049 6.163 - 6.210: 94.8168% ( 8) 00:12:51.049 6.210 - 6.258: 94.9198% ( 12) 00:12:51.049 6.258 - 6.305: 95.0571% ( 16) 00:12:51.049 6.305 - 6.353: 95.2544% ( 23) 00:12:51.049 6.353 - 6.400: 95.4518% ( 23) 00:12:51.049 6.400 - 6.447: 95.5891% ( 16) 00:12:51.049 6.495 - 6.542: 95.8208% ( 27) 00:12:51.049 6.542 - 6.590: 96.2156% ( 46) 00:12:51.049 6.590 - 6.637: 96.2756% ( 7) 00:12:51.049 6.637 - 6.684: 96.3700% ( 11) 00:12:51.049 6.684 - 6.732: 96.5417% ( 20) 00:12:51.049 6.732 - 6.779: 96.5846% ( 5) 00:12:51.049 6.779 - 6.827: 96.6103% ( 3) 00:12:51.049 6.827 - 6.874: 96.6446% ( 4) 00:12:51.049 6.874 - 6.921: 96.9536% ( 36) 00:12:51.049 6.921 - 6.969: 98.0606% ( 129) 00:12:51.049 6.969 - 7.016: 98.7128% ( 76) 00:12:51.049 7.016 - 7.064: 98.9531% ( 28) 00:12:51.049 7.064 - 7.111: 99.0560% ( 12) 00:12:51.049 7.111 - 7.159: 99.0904% ( 4) 00:12:51.049 7.159 - 7.206: 99.1676% ( 9) 00:12:51.049 7.206 - 7.253: 99.1762% ( 1) 00:12:51.049 7.253 - 7.301: 99.1848% ( 1) 00:12:51.049 7.301 - 7.348: 99.1933% ( 1) 00:12:51.049 7.964 - 8.012: 99.2019% ( 1) 00:12:51.049 8.059 - 8.107: 99.2105% ( 1) 00:12:51.049 8.201 - 8.249: 99.2191% ( 1) 00:12:51.049 8.296 - 8.344: 99.2277% ( 1) 00:12:51.049 8.391 - 8.439: 99.2362% ( 1) 00:12:51.049 8.628 - 8.676: 99.2448% ( 1) 00:12:51.049 8.676 - 8.723: 99.2534% ( 1) 00:12:51.049 8.770 - 8.818: 99.2620% ( 1) 00:12:51.049 8.960 - 9.007: 99.2706% ( 1) 00:12:51.049 9.150 - 9.197: 99.2877% ( 2) 00:12:51.049 9.292 - 9.339: 99.2963% ( 1) 00:12:51.049 9.339 - 9.387: 99.3135% ( 2) 00:12:51.049 9.434 - 9.481: 99.3221% ( 1) 00:12:51.049 9.529 - 9.576: 99.3306% ( 1) 00:12:51.049 9.624 - 9.671: 99.3478% ( 2) 00:12:51.049 9.719 - 9.766: 99.3564% ( 1) 00:12:51.049 9.766 - 9.813: 99.3650% ( 1) 00:12:51.049 9.908 - 9.956: 99.3736% ( 1) 00:12:51.049 10.050 - 10.098: 99.3907% ( 2) 00:12:51.049 10.240 - 10.287: 99.3993% ( 1) 00:12:51.049 10.335 - 10.382: 99.4079% ( 1) 00:12:51.049 10.382 - 10.430: 
99.4165% ( 1) 00:12:51.049 10.524 - 10.572: 99.4250% ( 1) 00:12:51.049 10.619 - 10.667: 99.4336% ( 1) 00:12:51.049 10.667 - 10.714: 99.4422% ( 1) 00:12:51.049 10.761 - 10.809: 99.4508% ( 1) 00:12:51.049 10.856 - 10.904: 99.4594% ( 1) 00:12:51.049 10.904 - 10.951: 99.4679% ( 1) 00:12:51.049 11.141 - 11.188: 99.4765% ( 1) 00:12:51.049 11.236 - 11.283: 99.4851% ( 1) 00:12:51.049 11.378 - 11.425: 99.4937% ( 1) 00:12:51.049 11.473 - 11.520: 99.5194% ( 3) 00:12:51.049 11.567 - 11.615: 99.5280% ( 1) 00:12:51.049 11.899 - 11.947: 99.5366% ( 1) 00:12:51.049 12.089 - 12.136: 99.5538% ( 2) 00:12:51.049 12.136 - 12.231: 99.5709% ( 2) 00:12:51.049 12.231 - 12.326: 99.5795% ( 1) 00:12:51.049 12.326 - 12.421: 99.5881% ( 1) 00:12:51.049 12.516 - 12.610: 99.6053% ( 2) 00:12:51.049 12.610 - 12.705: 99.6138% ( 1) 00:12:51.049 13.084 - 13.179: 99.6310% ( 2) 00:12:51.049 13.464 - 13.559: 99.6567% ( 3) 00:12:51.049 13.559 - 13.653: 99.6739% ( 2) 00:12:51.049 13.653 - 13.748: 99.6825% ( 1) 00:12:51.049 13.748 - 13.843: 99.7082% ( 3) 00:12:51.049 13.843 - 13.938: 99.7511% ( 5) 00:12:51.049 13.938 - 14.033: 99.7855% ( 4) 00:12:51.049 14.033 - 14.127: 99.8112% ( 3) 00:12:51.049 14.127 - 14.222: 99.8198% ( 1) 00:12:51.049 14.317 - 14.412: 99.8284% ( 1) 00:12:51.049 14.507 - 14.601: 99.8455% ( 2) 00:12:51.049 15.550 - 15.644: 99.8541% ( 1) 00:12:51.049 16.498 - 16.593: 99.8627% ( 1) 00:12:51.049 18.584 - 18.679: 99.8713% ( 1) 00:12:51.049 18.679 - 18.773: 99.8799% ( 1) 00:12:51.049 3980.705 - 4004.978: 99.9228% ( 5) 00:12:51.049 4004.978 - 4029.250: 100.0000% ( 9) 00:12:51.049 00:12:51.049 Complete histogram 00:12:51.049 ================== 00:12:51.049 Range in us Cumulative Count 00:12:51.049 2.643 - 2.655: 0.0687% ( 8) 00:12:51.049 2.655 - 2.667: 6.1358% ( 707) 00:12:51.049 2.667 - 2.679: 33.2618% ( 3161) 00:12:51.049 2.679 - 2.690: 45.8938% ( 1472) 00:12:51.049 2.690 - 2.702: 52.2870% ( 745) 00:12:51.049 2.702 - 2.714: 71.0632% ( 2188) 00:12:51.049 2.714 - 2.726: 84.6477% ( 1583) 
00:12:51.049 2.726 - 2.738: 90.0798% ( 633) 00:12:51.049 2.738 - 2.750: 93.7784% ( 431) 00:12:51.049 2.750 - 2.761: 96.0868% ( 269) 00:12:51.049 2.761 - 2.773: 97.1424% ( 123) 00:12:51.049 2.773 - 2.785: 97.7774% ( 74) 00:12:51.049 2.785 - 2.797: 98.0348% ( 30) 00:12:51.049 2.797 - 2.809: 98.1550% ( 14) 00:12:51.049 2.809 - 2.821: 98.2065% ( 6) 00:12:51.049 2.821 - 2.833: 98.2322% ( 3) 00:12:51.049 2.833 - 2.844: 98.2580% ( 3) 00:12:51.049 [2024-07-24 22:22:16.613721] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:51.049 2.844 - 2.856: 98.3009% ( 5) 00:12:51.049 2.856 - 2.868: 98.3266% ( 3) 00:12:51.049 2.880 - 2.892: 98.3609% ( 4) 00:12:51.049 2.892 - 2.904: 98.3781% ( 2) 00:12:51.049 2.904 - 2.916: 98.3867% ( 1) 00:12:51.049 2.916 - 2.927: 98.4038% ( 2) 00:12:51.049 2.987 - 2.999: 98.4124% ( 1) 00:12:51.049 2.999 - 3.010: 98.4382% ( 3) 00:12:51.049 3.010 - 3.022: 98.4553% ( 2) 00:12:51.049 3.034 - 3.058: 98.4639% ( 1) 00:12:51.049 3.105 - 3.129: 98.4725% ( 1) 00:12:51.049 3.176 - 3.200: 98.4897% ( 2) 00:12:51.049 3.200 - 3.224: 98.4982% ( 1) 00:12:51.049 3.224 - 3.247: 98.5411% ( 5) 00:12:51.049 3.247 - 3.271: 98.5841% ( 5) 00:12:51.049 3.271 - 3.295: 98.6012% ( 2) 00:12:51.049 3.295 - 3.319: 98.6184% ( 2) 00:12:51.049 3.390 - 3.413: 98.6355% ( 2) 00:12:51.049 3.413 - 3.437: 98.6527% ( 2) 00:12:51.049 3.437 - 3.461: 98.6613% ( 1) 00:12:51.049 3.461 - 3.484: 98.6699% ( 1) 00:12:51.049 3.484 - 3.508: 98.6956% ( 3) 00:12:51.049 3.532 - 3.556: 98.7214% ( 3) 00:12:51.049 3.556 - 3.579: 98.7385% ( 2) 00:12:51.049 3.579 - 3.603: 98.7557% ( 2) 00:12:51.049 3.603 - 3.627: 98.7728% ( 2) 00:12:51.049 3.627 - 3.650: 98.8072% ( 4) 00:12:51.049 3.650 - 3.674: 98.8158% ( 1) 00:12:51.049 3.698 - 3.721: 98.8243% ( 1) 00:12:51.049 3.721 - 3.745: 98.8501% ( 3) 00:12:51.049 3.745 - 3.769: 98.8844% ( 4) 00:12:51.049 3.793 - 3.816: 98.8930% ( 1) 00:12:51.049 3.816 - 3.840: 98.9102% ( 2) 00:12:51.049 3.840 - 3.864: 98.9187% 
( 1) 00:12:51.049 3.864 - 3.887: 98.9273% ( 1) 00:12:51.049 4.006 - 4.030: 98.9359% ( 1) 00:12:51.049 4.314 - 4.338: 98.9445% ( 1) 00:12:51.049 4.385 - 4.409: 98.9531% ( 1) 00:12:51.049 5.855 - 5.879: 98.9616% ( 1) 00:12:51.049 5.950 - 5.973: 98.9702% ( 1) 00:12:51.049 6.163 - 6.210: 98.9788% ( 1) 00:12:51.049 6.305 - 6.353: 98.9874% ( 1) 00:12:51.049 6.684 - 6.732: 98.9960% ( 1) 00:12:51.049 7.111 - 7.159: 99.0045% ( 1) 00:12:51.049 7.253 - 7.301: 99.0131% ( 1) 00:12:51.049 7.301 - 7.348: 99.0217% ( 1) 00:12:51.049 7.396 - 7.443: 99.0303% ( 1) 00:12:51.049 7.538 - 7.585: 99.0389% ( 1) 00:12:51.049 7.775 - 7.822: 99.0475% ( 1) 00:12:51.049 7.822 - 7.870: 99.0560% ( 1) 00:12:51.049 7.964 - 8.012: 99.0646% ( 1) 00:12:51.049 8.012 - 8.059: 99.0732% ( 1) 00:12:51.049 8.107 - 8.154: 99.0818% ( 1) 00:12:51.049 8.628 - 8.676: 99.0904% ( 1) 00:12:51.049 9.102 - 9.150: 99.0989% ( 1) 00:12:51.049 9.387 - 9.434: 99.1075% ( 1) 00:12:51.050 9.481 - 9.529: 99.1161% ( 1) 00:12:51.050 9.529 - 9.576: 99.1247% ( 1) 00:12:51.050 10.382 - 10.430: 99.1333% ( 1) 00:12:51.050 10.619 - 10.667: 99.1419% ( 1) 00:12:51.050 10.761 - 10.809: 99.1504% ( 1) 00:12:51.050 11.141 - 11.188: 99.1590% ( 1) 00:12:51.050 12.326 - 12.421: 99.1676% ( 1) 00:12:51.050 12.516 - 12.610: 99.1762% ( 1) 00:12:51.050 12.895 - 12.990: 99.1848% ( 1) 00:12:51.050 14.981 - 15.076: 99.1933% ( 1) 00:12:51.050 15.265 - 15.360: 99.2019% ( 1) 00:12:51.050 15.455 - 15.550: 99.2105% ( 1) 00:12:51.050 16.024 - 16.119: 99.2191% ( 1) 00:12:51.050 17.541 - 17.636: 99.2277% ( 1) 00:12:51.050 17.730 - 17.825: 99.2362% ( 1) 00:12:51.050 22.850 - 22.945: 99.2448% ( 1) 00:12:51.050 3325.345 - 3349.618: 99.2534% ( 1) 00:12:51.050 3980.705 - 4004.978: 99.7340% ( 56) 00:12:51.050 4004.978 - 4029.250: 100.0000% ( 31) 00:12:51.050 00:12:51.050 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:12:51.050 22:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:51.050 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:12:51.050 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:12:51.050 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:51.308 [ 00:12:51.308 { 00:12:51.308 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:51.308 "subtype": "Discovery", 00:12:51.308 "listen_addresses": [], 00:12:51.308 "allow_any_host": true, 00:12:51.308 "hosts": [] 00:12:51.308 }, 00:12:51.308 { 00:12:51.308 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:51.308 "subtype": "NVMe", 00:12:51.308 "listen_addresses": [ 00:12:51.308 { 00:12:51.308 "trtype": "VFIOUSER", 00:12:51.308 "adrfam": "IPv4", 00:12:51.308 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:51.308 "trsvcid": "0" 00:12:51.308 } 00:12:51.308 ], 00:12:51.308 "allow_any_host": true, 00:12:51.308 "hosts": [], 00:12:51.308 "serial_number": "SPDK1", 00:12:51.308 "model_number": "SPDK bdev Controller", 00:12:51.308 "max_namespaces": 32, 00:12:51.308 "min_cntlid": 1, 00:12:51.308 "max_cntlid": 65519, 00:12:51.308 "namespaces": [ 00:12:51.308 { 00:12:51.308 "nsid": 1, 00:12:51.308 "bdev_name": "Malloc1", 00:12:51.308 "name": "Malloc1", 00:12:51.308 "nguid": "54DC6740C39B4B6F8DE4D7F53A57FC49", 00:12:51.308 "uuid": "54dc6740-c39b-4b6f-8de4-d7f53a57fc49" 00:12:51.308 }, 00:12:51.308 { 00:12:51.308 "nsid": 2, 00:12:51.308 "bdev_name": "Malloc3", 00:12:51.308 "name": "Malloc3", 00:12:51.308 "nguid": "4BAA48A47A564CB4BA4FB681767651D8", 00:12:51.308 "uuid": "4baa48a4-7a56-4cb4-ba4f-b681767651d8" 00:12:51.308 } 00:12:51.308 ] 00:12:51.308 }, 00:12:51.308 { 00:12:51.308 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:12:51.308 "subtype": "NVMe", 00:12:51.308 "listen_addresses": [ 00:12:51.308 { 00:12:51.308 "trtype": "VFIOUSER", 00:12:51.308 "adrfam": "IPv4", 00:12:51.308 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:51.308 "trsvcid": "0" 00:12:51.308 } 00:12:51.308 ], 00:12:51.308 "allow_any_host": true, 00:12:51.308 "hosts": [], 00:12:51.308 "serial_number": "SPDK2", 00:12:51.308 "model_number": "SPDK bdev Controller", 00:12:51.308 "max_namespaces": 32, 00:12:51.308 "min_cntlid": 1, 00:12:51.308 "max_cntlid": 65519, 00:12:51.308 "namespaces": [ 00:12:51.308 { 00:12:51.308 "nsid": 1, 00:12:51.308 "bdev_name": "Malloc2", 00:12:51.308 "name": "Malloc2", 00:12:51.308 "nguid": "75A478876C024254A4CA9DD474DFE757", 00:12:51.308 "uuid": "75a47887-6c02-4254-a4ca-9dd474dfe757" 00:12:51.308 } 00:12:51.308 ] 00:12:51.308 } 00:12:51.308 ] 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3829989 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:51.309 22:22:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:12:51.569 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.569 [2024-07-24 22:22:17.146011] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:51.828 Malloc4 00:12:51.828 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:12:52.086 [2024-07-24 22:22:17.596425] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.086 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:52.086 Asynchronous Event Request test 00:12:52.086 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:12:52.086 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:12:52.086 Registering asynchronous event callbacks... 00:12:52.086 Starting namespace attribute notice tests for all controllers... 00:12:52.086 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:52.086 aer_cb - Changed Namespace 00:12:52.086 Cleaning up... 
00:12:52.344 [ 00:12:52.344 { 00:12:52.344 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:52.344 "subtype": "Discovery", 00:12:52.344 "listen_addresses": [], 00:12:52.344 "allow_any_host": true, 00:12:52.344 "hosts": [] 00:12:52.344 }, 00:12:52.344 { 00:12:52.344 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:52.344 "subtype": "NVMe", 00:12:52.344 "listen_addresses": [ 00:12:52.344 { 00:12:52.344 "trtype": "VFIOUSER", 00:12:52.344 "adrfam": "IPv4", 00:12:52.344 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:52.344 "trsvcid": "0" 00:12:52.344 } 00:12:52.344 ], 00:12:52.344 "allow_any_host": true, 00:12:52.344 "hosts": [], 00:12:52.344 "serial_number": "SPDK1", 00:12:52.344 "model_number": "SPDK bdev Controller", 00:12:52.344 "max_namespaces": 32, 00:12:52.344 "min_cntlid": 1, 00:12:52.344 "max_cntlid": 65519, 00:12:52.344 "namespaces": [ 00:12:52.344 { 00:12:52.344 "nsid": 1, 00:12:52.344 "bdev_name": "Malloc1", 00:12:52.344 "name": "Malloc1", 00:12:52.344 "nguid": "54DC6740C39B4B6F8DE4D7F53A57FC49", 00:12:52.344 "uuid": "54dc6740-c39b-4b6f-8de4-d7f53a57fc49" 00:12:52.344 }, 00:12:52.344 { 00:12:52.344 "nsid": 2, 00:12:52.344 "bdev_name": "Malloc3", 00:12:52.344 "name": "Malloc3", 00:12:52.344 "nguid": "4BAA48A47A564CB4BA4FB681767651D8", 00:12:52.344 "uuid": "4baa48a4-7a56-4cb4-ba4f-b681767651d8" 00:12:52.344 } 00:12:52.344 ] 00:12:52.344 }, 00:12:52.344 { 00:12:52.344 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:52.344 "subtype": "NVMe", 00:12:52.344 "listen_addresses": [ 00:12:52.344 { 00:12:52.344 "trtype": "VFIOUSER", 00:12:52.344 "adrfam": "IPv4", 00:12:52.344 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:52.344 "trsvcid": "0" 00:12:52.344 } 00:12:52.344 ], 00:12:52.344 "allow_any_host": true, 00:12:52.344 "hosts": [], 00:12:52.344 "serial_number": "SPDK2", 00:12:52.344 "model_number": "SPDK bdev Controller", 00:12:52.344 "max_namespaces": 32, 00:12:52.344 "min_cntlid": 1, 00:12:52.344 "max_cntlid": 65519, 00:12:52.344 "namespaces": [ 
00:12:52.344 { 00:12:52.344 "nsid": 1, 00:12:52.344 "bdev_name": "Malloc2", 00:12:52.344 "name": "Malloc2", 00:12:52.345 "nguid": "75A478876C024254A4CA9DD474DFE757", 00:12:52.345 "uuid": "75a47887-6c02-4254-a4ca-9dd474dfe757" 00:12:52.345 }, 00:12:52.345 { 00:12:52.345 "nsid": 2, 00:12:52.345 "bdev_name": "Malloc4", 00:12:52.345 "name": "Malloc4", 00:12:52.345 "nguid": "9430AB064ECC42F4B1B45F3C63724DDC", 00:12:52.345 "uuid": "9430ab06-4ecc-42f4-b1b4-5f3c63724ddc" 00:12:52.345 } 00:12:52.345 ] 00:12:52.345 } 00:12:52.345 ] 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3829989 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3825638 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3825638 ']' 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3825638 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3825638 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3825638' 00:12:52.345 killing process with pid 3825638 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@967 -- # kill 3825638 00:12:52.345 22:22:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3825638 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3830101 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3830101' 00:12:52.603 Process pid: 3830101 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3830101 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3830101 ']' 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.603 
22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.603 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.603 [2024-07-24 22:22:18.275410] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:12:52.603 [2024-07-24 22:22:18.276642] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:12:52.603 [2024-07-24 22:22:18.276714] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.603 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.863 [2024-07-24 22:22:18.339168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.863 [2024-07-24 22:22:18.456055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.863 [2024-07-24 22:22:18.456115] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.863 [2024-07-24 22:22:18.456131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.863 [2024-07-24 22:22:18.456143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.863 [2024-07-24 22:22:18.456155] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:52.863 [2024-07-24 22:22:18.456241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.863 [2024-07-24 22:22:18.456274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.863 [2024-07-24 22:22:18.456322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.863 [2024-07-24 22:22:18.456325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.863 [2024-07-24 22:22:18.545807] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:12:52.863 [2024-07-24 22:22:18.546040] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:12:52.863 [2024-07-24 22:22:18.546273] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:12:52.863 [2024-07-24 22:22:18.546825] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:12:52.863 [2024-07-24 22:22:18.547084] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:12:53.124 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.124 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:53.124 22:22:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:54.063 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:12:54.322 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:54.322 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:54.322 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.322 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:54.322 22:22:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.582 Malloc1 00:12:54.582 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:54.842 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:55.102 22:22:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:12:55.668 22:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.668 22:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:55.668 22:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.926 Malloc2 00:12:55.926 22:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:56.184 22:22:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:56.443 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3830101 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3830101 ']' 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3830101 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:56.701 22:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830101 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830101' 00:12:56.701 killing process with pid 3830101 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3830101 00:12:56.701 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3830101 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:56.960 00:12:56.960 real 0m53.577s 00:12:56.960 user 3m31.395s 00:12:56.960 sys 0m4.370s 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:56.960 ************************************ 00:12:56.960 END TEST nvmf_vfio_user 00:12:56.960 ************************************ 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.960 ************************************ 00:12:56.960 START TEST nvmf_vfio_user_nvme_compliance 00:12:56.960 ************************************ 00:12:56.960 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:12:57.219 * Looking for test storage... 00:12:57.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:57.219 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.220 22:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:57.220 22:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3830573 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3830573' 00:12:57.220 Process pid: 3830573 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3830573 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3830573 ']' 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.220 22:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.220 22:22:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:57.220 [2024-07-24 22:22:22.758217] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:12:57.220 [2024-07-24 22:22:22.758318] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.220 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.220 [2024-07-24 22:22:22.822226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:57.480 [2024-07-24 22:22:22.938758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.480 [2024-07-24 22:22:22.938824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.480 [2024-07-24 22:22:22.938840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.480 [2024-07-24 22:22:22.938853] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.480 [2024-07-24 22:22:22.938865] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.480 [2024-07-24 22:22:22.938942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.480 [2024-07-24 22:22:22.939002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.480 [2024-07-24 22:22:22.939005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.480 22:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.480 22:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:12:57.480 22:22:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.416 22:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.416 malloc0 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:58.416 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:12:58.677 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:58.677 22:22:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:12:58.677 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.677 00:12:58.677 00:12:58.677 CUnit - A unit testing framework for C - Version 2.1-3 00:12:58.677 http://cunit.sourceforge.net/ 00:12:58.677 00:12:58.677 00:12:58.677 Suite: nvme_compliance 00:12:58.677 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 22:22:24.284060] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.677 [2024-07-24 22:22:24.285624] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:12:58.677 [2024-07-24 22:22:24.285653] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:12:58.677 [2024-07-24 22:22:24.285676] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:12:58.677 [2024-07-24 22:22:24.287101] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.677 passed 00:12:58.937 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 22:22:24.396834] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.937 [2024-07-24 22:22:24.399860] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:58.937 passed 00:12:58.937 Test: admin_identify_ns ...[2024-07-24 22:22:24.511532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:58.937 [2024-07-24 22:22:24.573512] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:12:58.937 [2024-07-24 22:22:24.581526] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:12:58.937 [2024-07-24 
22:22:24.602635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.197 passed 00:12:59.197 Test: admin_get_features_mandatory_features ...[2024-07-24 22:22:24.707932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.197 [2024-07-24 22:22:24.710960] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.197 passed 00:12:59.197 Test: admin_get_features_optional_features ...[2024-07-24 22:22:24.817653] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.197 [2024-07-24 22:22:24.820682] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.197 passed 00:12:59.456 Test: admin_set_features_number_of_queues ...[2024-07-24 22:22:24.926557] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.456 [2024-07-24 22:22:25.033628] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.456 passed 00:12:59.456 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 22:22:25.135358] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.456 [2024-07-24 22:22:25.140395] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.716 passed 00:12:59.716 Test: admin_get_log_page_with_lpo ...[2024-07-24 22:22:25.245555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.716 [2024-07-24 22:22:25.314515] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:12:59.716 [2024-07-24 22:22:25.327587] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.716 passed 00:12:59.976 Test: fabric_property_get ...[2024-07-24 22:22:25.434468] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.976 [2024-07-24 22:22:25.435834] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:12:59.976 [2024-07-24 22:22:25.437511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.976 passed 00:12:59.976 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 22:22:25.541133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:12:59.976 [2024-07-24 22:22:25.542478] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:12:59.976 [2024-07-24 22:22:25.544163] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:12:59.976 passed 00:12:59.976 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 22:22:25.648422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.236 [2024-07-24 22:22:25.731508] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:00.236 [2024-07-24 22:22:25.747507] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:00.236 [2024-07-24 22:22:25.752641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.236 passed 00:13:00.236 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 22:22:25.848857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.236 [2024-07-24 22:22:25.850207] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:00.236 [2024-07-24 22:22:25.851884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.236 passed 00:13:00.497 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 22:22:25.960160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.497 [2024-07-24 22:22:26.038499] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:13:00.497 [2024-07-24 22:22:26.062493] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:00.497 [2024-07-24 22:22:26.067630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.497 passed 00:13:00.497 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 22:22:26.164945] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.497 [2024-07-24 22:22:26.166281] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:00.497 [2024-07-24 22:22:26.166326] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:00.497 [2024-07-24 22:22:26.167969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.789 passed 00:13:00.789 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 22:22:26.268118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:00.789 [2024-07-24 22:22:26.362493] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:00.789 [2024-07-24 22:22:26.370506] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:00.789 [2024-07-24 22:22:26.378498] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:00.789 [2024-07-24 22:22:26.386505] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:00.789 [2024-07-24 22:22:26.415617] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:00.789 passed 00:13:01.077 Test: admin_create_io_sq_verify_pc ...[2024-07-24 22:22:26.519285] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:01.077 [2024-07-24 22:22:26.535518] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:01.077 
[2024-07-24 22:22:26.552796] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:01.077 passed 00:13:01.077 Test: admin_create_io_qp_max_qps ...[2024-07-24 22:22:26.649466] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:02.455 [2024-07-24 22:22:27.743501] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:02.455 [2024-07-24 22:22:28.126196] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:02.715 passed 00:13:02.715 Test: admin_create_io_sq_shared_cq ...[2024-07-24 22:22:28.224576] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:02.715 [2024-07-24 22:22:28.358497] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:02.715 [2024-07-24 22:22:28.395598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:02.976 passed 00:13:02.976 00:13:02.976 Run Summary: Type Total Ran Passed Failed Inactive 00:13:02.976 suites 1 1 n/a 0 0 00:13:02.976 tests 18 18 18 0 0 00:13:02.976 asserts 360 360 360 0 n/a 00:13:02.976 00:13:02.976 Elapsed time = 1.744 seconds 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3830573 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3830573 ']' 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3830573 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:02.976 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3830573 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3830573' 00:13:02.976 killing process with pid 3830573 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3830573 00:13:02.976 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3830573 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:03.237 00:13:03.237 real 0m6.100s 00:13:03.237 user 0m17.090s 00:13:03.237 sys 0m0.555s 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:03.237 ************************************ 00:13:03.237 END TEST nvmf_vfio_user_nvme_compliance 00:13:03.237 ************************************ 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.237 ************************************ 00:13:03.237 START TEST nvmf_vfio_user_fuzz 00:13:03.237 ************************************ 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:03.237 * Looking for test storage... 00:13:03.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.237 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.238 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:03.238 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3831227 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3831227' 00:13:03.238 Process pid: 3831227 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3831227 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3831227 ']' 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:03.238 22:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:03.238 22:22:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:03.496 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:03.496 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:03.496 22:22:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.872 malloc0 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:04.872 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:04.873 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:04.873 22:22:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:36.936 Fuzzing completed. Shutting down the fuzz application 00:13:36.936 00:13:36.936 Dumping successful admin opcodes: 00:13:36.936 8, 9, 10, 24, 00:13:36.936 Dumping successful io opcodes: 00:13:36.936 0, 00:13:36.936 NS: 0x200003a1ef00 I/O qp, Total commands completed: 598485, total successful commands: 2315, random_seed: 1296618880 00:13:36.936 NS: 0x200003a1ef00 admin qp, Total commands completed: 144640, total successful commands: 1174, random_seed: 896691584 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3831227 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3831227 ']' 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3831227 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3831227 00:13:36.936 22:23:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3831227' 00:13:36.936 killing process with pid 3831227 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3831227 00:13:36.936 22:23:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3831227 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:36.936 00:13:36.936 real 0m32.317s 00:13:36.936 user 0m34.248s 00:13:36.936 sys 0m26.804s 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:36.936 ************************************ 00:13:36.936 END TEST nvmf_vfio_user_fuzz 00:13:36.936 ************************************ 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:13:36.936 22:23:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.937 ************************************ 00:13:36.937 START TEST nvmf_auth_target 00:13:36.937 ************************************ 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:36.937 * Looking for test storage... 00:13:36.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.937 22:23:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.937 22:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:37.197 22:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:13:37.197 Found 0000:08:00.0 (0x8086 - 0x159b) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:13:37.197 Found 0000:08:00.1 (0x8086 - 0x159b) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:37.197 22:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:13:37.197 Found net devices under 0000:08:00.0: cvl_0_0 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:13:37.197 Found net devices under 0000:08:00.1: cvl_0_1 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.197 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:37.198 22:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.198 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:37.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:37.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:13:37.456 00:13:37.456 --- 10.0.0.2 ping statistics --- 00:13:37.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.456 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:13:37.456 00:13:37.456 --- 10.0.0.1 ping statistics --- 00:13:37.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.456 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3835370 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3835370 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3835370 ']' 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:37.456 22:23:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3835396 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4a31eaa9a299cc84779cd21b17cb457681732732cbaed01c 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aZB 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4a31eaa9a299cc84779cd21b17cb457681732732cbaed01c 0 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4a31eaa9a299cc84779cd21b17cb457681732732cbaed01c 0 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4a31eaa9a299cc84779cd21b17cb457681732732cbaed01c 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aZB 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aZB 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.aZB 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=058b2fcbd8718de10a1aad06ca045bfee06f2c8afe230f700644bc0df16c8515 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ZjA 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 058b2fcbd8718de10a1aad06ca045bfee06f2c8afe230f700644bc0df16c8515 3 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 058b2fcbd8718de10a1aad06ca045bfee06f2c8afe230f700644bc0df16c8515 3 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=058b2fcbd8718de10a1aad06ca045bfee06f2c8afe230f700644bc0df16c8515 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:13:37.715 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ZjA 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ZjA 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ZjA 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b9ff0da0ef8b9c100ec3a1a861e13977 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VLZ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b9ff0da0ef8b9c100ec3a1a861e13977 1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
b9ff0da0ef8b9c100ec3a1a861e13977 1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b9ff0da0ef8b9c100ec3a1a861e13977 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VLZ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VLZ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VLZ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b12e4e7505d2cdba6c996f2d22e9cd3af0c5b74c76b209b9 00:13:37.974 22:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.3mJ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b12e4e7505d2cdba6c996f2d22e9cd3af0c5b74c76b209b9 2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b12e4e7505d2cdba6c996f2d22e9cd3af0c5b74c76b209b9 2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b12e4e7505d2cdba6c996f2d22e9cd3af0c5b74c76b209b9 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.3mJ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.3mJ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.3mJ 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d4f4030e2914e42930fcd8c51bfbc91dc5f15d0b93e2844d 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.NHI 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d4f4030e2914e42930fcd8c51bfbc91dc5f15d0b93e2844d 2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 d4f4030e2914e42930fcd8c51bfbc91dc5f15d0b93e2844d 2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d4f4030e2914e42930fcd8c51bfbc91dc5f15d0b93e2844d 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.NHI 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.NHI 00:13:37.974 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.NHI 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5bc16a283ac8153fc0bb4188ef3ac72f 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.TCZ 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5bc16a283ac8153fc0bb4188ef3ac72f 1 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 5bc16a283ac8153fc0bb4188ef3ac72f 1 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5bc16a283ac8153fc0bb4188ef3ac72f 00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:13:37.975 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.TCZ 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.TCZ 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.TCZ 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4896b6f230f799a38f022215542707c8c314f2f3f97adb4c54dc356ead121054 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.CpI 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4896b6f230f799a38f022215542707c8c314f2f3f97adb4c54dc356ead121054 3 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 4896b6f230f799a38f022215542707c8c314f2f3f97adb4c54dc356ead121054 3 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4896b6f230f799a38f022215542707c8c314f2f3f97adb4c54dc356ead121054 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.CpI 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.CpI 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.CpI 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3835370 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3835370 ']' 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
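Each gen_dhchap_key call traced above follows the same recipe: draw len/2 random bytes as a hex string, wrap them as a DHHC-1 secret tagged with the digest index, and store the result in a private temp file. A simplified stand-alone sketch of that flow (od replaces the log's `xxd -p -c0` for portability, and the printf line stands in for the small Python snippet nvmf/common.sh uses to emit the final DHHC-1 encoding):

```shell
# Simplified sketch of the gen_dhchap_key flow seen in the trace.
# Assumptions: od stands in for xxd, and printf stands in for the
# real script's Python DHHC-1 encoding step.
digest=0                                   # 0=null, 1=sha256, 2=sha384, 3=sha512
len=48                                     # hex characters requested
key=$(od -An -tx1 -N $((len / 2)) /dev/urandom | tr -d ' \n')
file=$(mktemp -t spdk.key-null.XXX)
printf 'DHHC-1:%02d:%s:\n' "$digest" "$key" > "$file"
chmod 0600 "$file"                         # secrets must not be world-readable
echo "$file"
```

The chmod 0600 mirrors the trace: every generated key file is locked down before its path is echoed back to the caller and recorded in keys[]/ckeys[].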
00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.233 22:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.490 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.490 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:38.490 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3835396 /var/tmp/host.sock 00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3835396 ']' 00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:38.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
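The waitforlisten helper invoked around each daemon start (local max_retries=100 plus the "Waiting for process to start up and listen on UNIX domain socket..." echo) amounts to polling until the target's RPC UNIX socket appears. A self-contained sketch, with a small background python one-liner as a hypothetical stand-in for nvmf_tgt/spdk_tgt binding its socket (paths and timings here are illustrative, not from the trace):

```shell
# Sketch of the waitforlisten pattern: poll up to max_retries for the
# daemon's UNIX-domain RPC socket. The backgrounded python one-liner is a
# stand-in for the real target binding /var/tmp/spdk.sock.
sock=$(mktemp -u /tmp/demo-rpc.XXXXXX)     # hypothetical socket path
( sleep 0.2
  python3 -c 'import socket, sys
s = socket.socket(socket.AF_UNIX)
s.bind(sys.argv[1])
s.listen(1)' "$sock"
) &
i=0; max_retries=100
until [ -S "$sock" ]; do
  i=$((i + 1))
  [ "$i" -lt "$max_retries" ] || { echo timeout; break; }
  sleep 0.1
done
[ -S "$sock" ] && echo "listening on $sock"
```

In the real helper the poll also verifies the PID is still alive, which is why a crashed nvmf_tgt fails fast instead of burning all 100 retries.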
00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.491 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.aZB 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.aZB 00:13:38.748 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.aZB 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.ZjA ]] 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZjA 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZjA 00:13:39.006 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ZjA 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VLZ 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VLZ 00:13:39.263 22:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VLZ 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.3mJ ]] 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3mJ 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3mJ 00:13:39.521 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.3mJ 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.NHI 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.NHI 00:13:39.779 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.NHI 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.TCZ ]] 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TCZ 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TCZ 00:13:40.037 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.TCZ 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.CpI 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.CpI 00:13:40.295 22:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.CpI 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.552 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
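The target/auth.sh@91-94 lines above are the first cell of a nested sweep over digests, dhgroups, and key indices, reconfiguring the host via bdev_nvme_set_options and reconnecting for each combination (the trace shows sha256/null/key0). The loop shape, with the digest and dhgroup lists assumed from typical SPDK DH-HMAC-CHAP values since the exact arrays are not visible in this excerpt:

```shell
# Shape of the nested sweep in target/auth.sh. The digests/dhgroups arrays
# are assumptions (common DH-HMAC-CHAP values), not read from this log; the
# echo stands in for the hostrpc bdev_nvme_set_options + attach calls.
digests=(sha256 sha384 sha512)
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3)
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # real test: hostrpc bdev_nvme_set_options \
      #              --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      echo "$digest $dhgroup key$keyid"
    done
  done
done
```

This is why the log repeats the set_options/attach/get_qpairs/detach cycle so many times: every digest × dhgroup × key combination gets its own authenticated connect.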
00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.809 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:41.066 00:13:41.066 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:41.066 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:41.066 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.324 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.324 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.324 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.324 22:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.324 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.324 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:13:41.324 { 00:13:41.324 "cntlid": 1, 00:13:41.324 "qid": 0, 00:13:41.324 "state": "enabled", 00:13:41.324 "thread": "nvmf_tgt_poll_group_000", 00:13:41.324 "listen_address": { 00:13:41.324 "trtype": "TCP", 00:13:41.324 "adrfam": "IPv4", 00:13:41.324 "traddr": "10.0.0.2", 00:13:41.324 "trsvcid": "4420" 00:13:41.324 }, 00:13:41.324 "peer_address": { 00:13:41.324 "trtype": "TCP", 00:13:41.324 "adrfam": "IPv4", 00:13:41.324 "traddr": "10.0.0.1", 00:13:41.324 "trsvcid": "58862" 00:13:41.324 }, 00:13:41.324 "auth": { 00:13:41.324 "state": "completed", 00:13:41.324 "digest": "sha256", 00:13:41.324 "dhgroup": "null" 00:13:41.324 } 00:13:41.324 } 00:13:41.324 ]' 00:13:41.324 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.581 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.839 22:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:43.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:43.211 22:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.211 22:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:43.777 00:13:43.777 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:43.777 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:43.777 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:44.035 { 00:13:44.035 "cntlid": 3, 00:13:44.035 "qid": 0, 00:13:44.035 "state": "enabled", 00:13:44.035 "thread": "nvmf_tgt_poll_group_000", 00:13:44.035 "listen_address": { 00:13:44.035 "trtype": "TCP", 00:13:44.035 "adrfam": "IPv4", 00:13:44.035 "traddr": "10.0.0.2", 00:13:44.035 "trsvcid": "4420" 00:13:44.035 }, 00:13:44.035 "peer_address": { 00:13:44.035 "trtype": "TCP", 00:13:44.035 "adrfam": "IPv4", 00:13:44.035 "traddr": "10.0.0.1", 00:13:44.035 "trsvcid": "58894" 00:13:44.035 }, 00:13:44.035 "auth": { 00:13:44.035 "state": "completed", 00:13:44.035 "digest": "sha256", 00:13:44.035 "dhgroup": "null" 00:13:44.035 } 00:13:44.035 } 00:13:44.035 ]' 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:44.035 22:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.035 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.036 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.294 22:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:45.667 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.925 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:45.925 
22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:46.183 00:13:46.183 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:46.183 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:46.183 22:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:46.441 { 00:13:46.441 "cntlid": 5, 00:13:46.441 "qid": 0, 00:13:46.441 "state": "enabled", 00:13:46.441 "thread": "nvmf_tgt_poll_group_000", 00:13:46.441 "listen_address": { 00:13:46.441 "trtype": "TCP", 00:13:46.441 "adrfam": "IPv4", 00:13:46.441 "traddr": "10.0.0.2", 00:13:46.441 "trsvcid": "4420" 00:13:46.441 }, 00:13:46.441 "peer_address": { 00:13:46.441 "trtype": "TCP", 00:13:46.441 "adrfam": "IPv4", 00:13:46.441 "traddr": 
"10.0.0.1", 00:13:46.441 "trsvcid": "58918" 00:13:46.441 }, 00:13:46.441 "auth": { 00:13:46.441 "state": "completed", 00:13:46.441 "digest": "sha256", 00:13:46.441 "dhgroup": "null" 00:13:46.441 } 00:13:46.441 } 00:13:46.441 ]' 00:13:46.441 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.699 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.957 22:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.332 22:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:48.332 22:23:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.332 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:48.899 00:13:48.899 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:48.899 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:48.899 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.157 22:23:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:49.157 { 00:13:49.157 "cntlid": 7, 00:13:49.157 "qid": 0, 00:13:49.157 "state": "enabled", 00:13:49.157 "thread": "nvmf_tgt_poll_group_000", 00:13:49.157 "listen_address": { 00:13:49.157 "trtype": "TCP", 00:13:49.157 "adrfam": "IPv4", 00:13:49.157 "traddr": "10.0.0.2", 00:13:49.157 "trsvcid": "4420" 00:13:49.157 }, 00:13:49.157 "peer_address": { 00:13:49.157 "trtype": "TCP", 00:13:49.157 "adrfam": "IPv4", 00:13:49.157 "traddr": "10.0.0.1", 00:13:49.157 "trsvcid": "58936" 00:13:49.157 }, 00:13:49.157 "auth": { 00:13:49.157 "state": "completed", 00:13:49.157 "digest": "sha256", 00:13:49.157 "dhgroup": "null" 00:13:49.157 } 00:13:49.157 } 00:13:49.157 ]' 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.157 22:23:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.417 22:23:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:50.795 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.053 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:51.311 00:13:51.311 22:23:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:51.311 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:51.311 22:23:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.570 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.570 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.570 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.570 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:51.829 { 00:13:51.829 "cntlid": 9, 00:13:51.829 "qid": 0, 00:13:51.829 "state": "enabled", 00:13:51.829 "thread": "nvmf_tgt_poll_group_000", 00:13:51.829 "listen_address": { 00:13:51.829 "trtype": "TCP", 00:13:51.829 "adrfam": "IPv4", 00:13:51.829 "traddr": "10.0.0.2", 00:13:51.829 "trsvcid": "4420" 00:13:51.829 }, 00:13:51.829 "peer_address": { 00:13:51.829 "trtype": "TCP", 00:13:51.829 "adrfam": "IPv4", 00:13:51.829 "traddr": "10.0.0.1", 00:13:51.829 "trsvcid": "54290" 00:13:51.829 }, 00:13:51.829 "auth": { 00:13:51.829 "state": "completed", 00:13:51.829 "digest": "sha256", 00:13:51.829 "dhgroup": "ffdhe2048" 00:13:51.829 } 00:13:51.829 } 00:13:51.829 ]' 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.829 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:52.110 22:23:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:53.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.521 22:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:53.521 22:23:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.521 22:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:53.521 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:54.088 00:13:54.088 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:54.088 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:54.088 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:54.346 { 
00:13:54.346 "cntlid": 11, 00:13:54.346 "qid": 0, 00:13:54.346 "state": "enabled", 00:13:54.346 "thread": "nvmf_tgt_poll_group_000", 00:13:54.346 "listen_address": { 00:13:54.346 "trtype": "TCP", 00:13:54.346 "adrfam": "IPv4", 00:13:54.346 "traddr": "10.0.0.2", 00:13:54.346 "trsvcid": "4420" 00:13:54.346 }, 00:13:54.346 "peer_address": { 00:13:54.346 "trtype": "TCP", 00:13:54.346 "adrfam": "IPv4", 00:13:54.346 "traddr": "10.0.0.1", 00:13:54.346 "trsvcid": "54324" 00:13:54.346 }, 00:13:54.346 "auth": { 00:13:54.346 "state": "completed", 00:13:54.346 "digest": "sha256", 00:13:54.346 "dhgroup": "ffdhe2048" 00:13:54.346 } 00:13:54.346 } 00:13:54.346 ]' 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.346 22:23:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.605 22:23:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:55.982 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.240 22:23:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:56.499 00:13:56.499 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:56.499 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:56.499 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:56.757 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:56.757 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:56.757 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.757 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.758 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.758 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:56.758 { 00:13:56.758 "cntlid": 13, 00:13:56.758 "qid": 0, 00:13:56.758 "state": "enabled", 00:13:56.758 "thread": "nvmf_tgt_poll_group_000", 00:13:56.758 "listen_address": { 00:13:56.758 "trtype": "TCP", 00:13:56.758 "adrfam": "IPv4", 00:13:56.758 "traddr": "10.0.0.2", 00:13:56.758 "trsvcid": "4420" 00:13:56.758 }, 00:13:56.758 "peer_address": { 00:13:56.758 "trtype": "TCP", 00:13:56.758 "adrfam": "IPv4", 00:13:56.758 "traddr": "10.0.0.1", 00:13:56.758 "trsvcid": "54344" 00:13:56.758 }, 00:13:56.758 "auth": { 00:13:56.758 "state": "completed", 00:13:56.758 "digest": "sha256", 00:13:56.758 "dhgroup": "ffdhe2048" 00:13:56.758 } 00:13:56.758 } 00:13:56.758 ]' 00:13:56.758 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:56.758 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:56.758 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:57.016 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:57.016 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:57.016 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.016 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.016 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:57.274 22:23:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.650 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:58.651 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:13:59.218 00:13:59.218 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:59.218 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:59.218 22:23:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:13:59.477 { 00:13:59.477 "cntlid": 15, 00:13:59.477 "qid": 0, 00:13:59.477 "state": "enabled", 00:13:59.477 "thread": "nvmf_tgt_poll_group_000", 00:13:59.477 "listen_address": { 00:13:59.477 "trtype": "TCP", 00:13:59.477 "adrfam": "IPv4", 00:13:59.477 "traddr": "10.0.0.2", 00:13:59.477 "trsvcid": "4420" 00:13:59.477 }, 00:13:59.477 "peer_address": { 00:13:59.477 "trtype": "TCP", 00:13:59.477 "adrfam": "IPv4", 00:13:59.477 "traddr": "10.0.0.1", 00:13:59.477 "trsvcid": "42650" 00:13:59.477 }, 00:13:59.477 "auth": { 
00:13:59.477 "state": "completed", 00:13:59.477 "digest": "sha256", 00:13:59.477 "dhgroup": "ffdhe2048" 00:13:59.477 } 00:13:59.477 } 00:13:59.477 ]' 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.477 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.047 22:23:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:00.987 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.245 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.503 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.503 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.503 22:23:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:01.760 00:14:01.760 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.760 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.760 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:02.018 { 00:14:02.018 "cntlid": 17, 00:14:02.018 "qid": 0, 00:14:02.018 "state": "enabled", 00:14:02.018 "thread": "nvmf_tgt_poll_group_000", 00:14:02.018 "listen_address": { 00:14:02.018 "trtype": "TCP", 00:14:02.018 "adrfam": "IPv4", 00:14:02.018 "traddr": "10.0.0.2", 00:14:02.018 "trsvcid": "4420" 00:14:02.018 }, 00:14:02.018 "peer_address": { 00:14:02.018 "trtype": "TCP", 00:14:02.018 "adrfam": "IPv4", 00:14:02.018 "traddr": "10.0.0.1", 00:14:02.018 "trsvcid": "42676" 00:14:02.018 }, 00:14:02.018 "auth": { 00:14:02.018 "state": "completed", 00:14:02.018 "digest": "sha256", 00:14:02.018 "dhgroup": "ffdhe3072" 00:14:02.018 } 00:14:02.018 } 00:14:02.018 ]' 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.018 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:02.276 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:02.276 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:02.276 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.276 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.276 22:23:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.533 22:23:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:14:03.906 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.906 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:03.906 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:03.907 22:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:03.907 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:04.471 00:14:04.471 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:04.471 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:04.471 22:23:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:04.729 { 00:14:04.729 "cntlid": 19, 00:14:04.729 "qid": 0, 00:14:04.729 "state": "enabled", 00:14:04.729 "thread": "nvmf_tgt_poll_group_000", 00:14:04.729 "listen_address": { 00:14:04.729 "trtype": "TCP", 00:14:04.729 "adrfam": "IPv4", 00:14:04.729 "traddr": "10.0.0.2", 00:14:04.729 "trsvcid": "4420" 00:14:04.729 }, 00:14:04.729 "peer_address": { 00:14:04.729 "trtype": "TCP", 00:14:04.729 "adrfam": "IPv4", 00:14:04.729 "traddr": "10.0.0.1", 00:14:04.729 "trsvcid": "42716" 00:14:04.729 }, 00:14:04.729 "auth": { 00:14:04.729 "state": "completed", 00:14:04.729 "digest": "sha256", 00:14:04.729 "dhgroup": "ffdhe3072" 00:14:04.729 } 00:14:04.729 } 00:14:04.729 ]' 00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:04.729 
22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:04.729 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:04.986 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:04.986 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:04.986 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:05.243 22:23:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==:
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:06.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:06.610 22:23:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:06.610 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:07.174
00:14:07.174 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:07.174 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:07.175 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:07.432 {
00:14:07.432 "cntlid": 21,
00:14:07.432 "qid": 0,
00:14:07.432 "state": "enabled",
00:14:07.432 "thread": "nvmf_tgt_poll_group_000",
00:14:07.432 "listen_address": {
00:14:07.432 "trtype": "TCP",
00:14:07.432 "adrfam": "IPv4",
00:14:07.432 "traddr": "10.0.0.2",
00:14:07.432 "trsvcid": "4420"
00:14:07.432 },
00:14:07.432 "peer_address": {
00:14:07.432 "trtype": "TCP",
00:14:07.432 "adrfam": "IPv4",
00:14:07.432 "traddr": "10.0.0.1",
00:14:07.432 "trsvcid": "42728"
00:14:07.432 },
00:14:07.432 "auth": {
00:14:07.432 "state": "completed",
00:14:07.432 "digest": "sha256",
00:14:07.432 "dhgroup": "ffdhe3072"
00:14:07.432 }
00:14:07.432 }
00:14:07.432 ]'
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:07.432 22:23:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:07.432 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:07.432 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:07.432 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:07.432 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:07.432 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:07.689 22:23:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP:
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:09.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:09.058 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:09.315 22:23:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:09.573
00:14:09.573 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:09.573 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:09.573 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:10.138 {
00:14:10.138 "cntlid": 23,
00:14:10.138 "qid": 0,
00:14:10.138 "state": "enabled",
00:14:10.138 "thread": "nvmf_tgt_poll_group_000",
00:14:10.138 "listen_address": {
00:14:10.138 "trtype": "TCP",
00:14:10.138 "adrfam": "IPv4",
00:14:10.138 "traddr": "10.0.0.2",
00:14:10.138 "trsvcid": "4420"
00:14:10.138 },
00:14:10.138 "peer_address": {
00:14:10.138 "trtype": "TCP",
00:14:10.138 "adrfam": "IPv4",
00:14:10.138 "traddr": "10.0.0.1",
00:14:10.138 "trsvcid": "46832"
00:14:10.138 },
00:14:10.138 "auth": {
00:14:10.138 "state": "completed",
00:14:10.138 "digest": "sha256",
00:14:10.138 "dhgroup": "ffdhe3072"
00:14:10.138 }
00:14:10.138 }
00:14:10.138 ]'
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:10.138 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:10.396 22:23:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=:
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:11.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:11.769 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:12.028 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:12.286
00:14:12.286 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:12.286 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:12.286 22:23:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:12.544 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:12.544 {
00:14:12.544 "cntlid": 25,
00:14:12.544 "qid": 0,
00:14:12.544 "state": "enabled",
00:14:12.544 "thread": "nvmf_tgt_poll_group_000",
00:14:12.544 "listen_address": {
00:14:12.544 "trtype": "TCP",
00:14:12.544 "adrfam": "IPv4",
00:14:12.544 "traddr": "10.0.0.2",
00:14:12.544 "trsvcid": "4420"
00:14:12.544 },
00:14:12.544 "peer_address": {
00:14:12.544 "trtype": "TCP",
00:14:12.544 "adrfam": "IPv4",
00:14:12.544 "traddr": "10.0.0.1",
00:14:12.544 "trsvcid": "46870"
00:14:12.544 },
00:14:12.544 "auth": {
00:14:12.544 "state": "completed",
00:14:12.544 "digest": "sha256",
00:14:12.544 "dhgroup": "ffdhe4096"
00:14:12.544 }
00:14:12.544 }
00:14:12.545 ]'
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:12.802 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:13.060 22:23:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=:
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:14.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:14.435 22:23:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:14.693 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:14.951
00:14:14.951 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:14.951 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:14.951 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:15.517 {
00:14:15.517 "cntlid": 27,
00:14:15.517 "qid": 0,
00:14:15.517 "state": "enabled",
00:14:15.517 "thread": "nvmf_tgt_poll_group_000",
00:14:15.517 "listen_address": {
00:14:15.517 "trtype": "TCP",
00:14:15.517 "adrfam": "IPv4",
00:14:15.517 "traddr": "10.0.0.2",
00:14:15.517 "trsvcid": "4420"
00:14:15.517 },
00:14:15.517 "peer_address": {
00:14:15.517 "trtype": "TCP",
00:14:15.517 "adrfam": "IPv4",
00:14:15.517 "traddr": "10.0.0.1",
00:14:15.517 "trsvcid": "46902"
00:14:15.517 },
00:14:15.517 "auth": {
00:14:15.517 "state": "completed",
00:14:15.517 "digest": "sha256",
00:14:15.517 "dhgroup": "ffdhe4096"
00:14:15.517 }
00:14:15.517 }
00:14:15.517 ]'
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:15.517 22:23:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:15.517 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:15.517 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:15.517 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:15.518 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:15.518 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:15.776 22:23:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==:
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:17.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:17.150 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.408 22:23:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:17.701
00:14:17.701 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:17.701 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:17.701 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:17.983 {
00:14:17.983 "cntlid": 29,
00:14:17.983 "qid": 0,
00:14:17.983 "state": "enabled",
00:14:17.983 "thread": "nvmf_tgt_poll_group_000",
00:14:17.983 "listen_address": {
00:14:17.983 "trtype": "TCP",
00:14:17.983 "adrfam": "IPv4",
00:14:17.983 "traddr": "10.0.0.2",
00:14:17.983 "trsvcid": "4420"
00:14:17.983 },
00:14:17.983 "peer_address": {
00:14:17.983 "trtype": "TCP",
00:14:17.983 "adrfam": "IPv4",
00:14:17.983 "traddr": "10.0.0.1",
00:14:17.983 "trsvcid": "46918"
00:14:17.983 },
00:14:17.983 "auth": {
00:14:17.983 "state": "completed",
00:14:17.983 "digest": "sha256",
00:14:17.983 "dhgroup": "ffdhe4096"
00:14:17.983 }
00:14:17.983 }
00:14:17.983 ]'
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:17.983 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:18.241 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:18.241 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:18.241 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:18.241 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:18.499 22:23:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP:
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:19.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:19.434 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:19.999 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:20.257
00:14:20.257 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:20.257 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:20.257 22:23:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:20.516 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:20.516 {
00:14:20.516 "cntlid": 31,
00:14:20.516 "qid": 0,
00:14:20.516 "state": "enabled",
00:14:20.516 "thread": "nvmf_tgt_poll_group_000",
00:14:20.516 "listen_address": {
00:14:20.516 "trtype": "TCP",
00:14:20.516 "adrfam": "IPv4",
00:14:20.516 "traddr": "10.0.0.2",
00:14:20.516 "trsvcid": "4420"
00:14:20.516 },
00:14:20.516 "peer_address": {
00:14:20.516 "trtype": "TCP",
00:14:20.516 "adrfam": "IPv4",
00:14:20.516 "traddr": "10.0.0.1",
00:14:20.516 "trsvcid": "55174"
00:14:20.516 },
00:14:20.516 "auth": {
00:14:20.516 "state": "completed",
00:14:20.516 "digest": "sha256",
00:14:20.516 "dhgroup": "ffdhe4096"
00:14:20.516 }
00:14:20.516 }
00:14:20.516 ]'
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:20.774 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.032 22:23:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=:
00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:22.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:22.405 22:23:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.405 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.340 00:14:23.340 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.340 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.340 22:23:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.340 22:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.340 { 00:14:23.340 "cntlid": 33, 00:14:23.340 "qid": 0, 00:14:23.340 "state": "enabled", 00:14:23.340 "thread": "nvmf_tgt_poll_group_000", 00:14:23.340 "listen_address": { 00:14:23.340 "trtype": "TCP", 00:14:23.340 "adrfam": "IPv4", 00:14:23.340 "traddr": "10.0.0.2", 00:14:23.340 "trsvcid": "4420" 00:14:23.340 }, 00:14:23.340 "peer_address": { 00:14:23.340 "trtype": "TCP", 00:14:23.340 "adrfam": "IPv4", 00:14:23.340 "traddr": "10.0.0.1", 00:14:23.340 "trsvcid": "55200" 00:14:23.340 }, 00:14:23.340 "auth": { 00:14:23.340 "state": "completed", 00:14:23.340 "digest": "sha256", 00:14:23.340 "dhgroup": "ffdhe6144" 00:14:23.340 } 00:14:23.340 } 00:14:23.340 ]' 00:14:23.340 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.598 22:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.598 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.856 22:23:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.229 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.230 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.230 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.486 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.486 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:25.486 22:23:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:26.051 00:14:26.051 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:26.051 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:26.051 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:26.308 { 00:14:26.308 "cntlid": 35, 00:14:26.308 "qid": 0, 00:14:26.308 "state": "enabled", 00:14:26.308 "thread": "nvmf_tgt_poll_group_000", 00:14:26.308 "listen_address": { 00:14:26.308 "trtype": "TCP", 00:14:26.308 "adrfam": "IPv4", 00:14:26.308 "traddr": "10.0.0.2", 00:14:26.308 "trsvcid": "4420" 00:14:26.308 }, 00:14:26.308 "peer_address": { 00:14:26.308 "trtype": "TCP", 00:14:26.308 "adrfam": "IPv4", 00:14:26.308 "traddr": "10.0.0.1", 00:14:26.308 "trsvcid": "55214" 00:14:26.308 
}, 00:14:26.308 "auth": { 00:14:26.308 "state": "completed", 00:14:26.308 "digest": "sha256", 00:14:26.308 "dhgroup": "ffdhe6144" 00:14:26.308 } 00:14:26.308 } 00:14:26.308 ]' 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:26.308 22:23:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:26.565 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.566 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.566 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.823 22:23:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:28.195 22:23:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.128 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.128 { 00:14:29.128 "cntlid": 37, 00:14:29.128 "qid": 0, 00:14:29.128 "state": "enabled", 00:14:29.128 "thread": "nvmf_tgt_poll_group_000", 00:14:29.128 "listen_address": { 00:14:29.128 "trtype": "TCP", 00:14:29.128 "adrfam": "IPv4", 00:14:29.128 "traddr": "10.0.0.2", 00:14:29.128 "trsvcid": "4420" 00:14:29.128 }, 00:14:29.128 "peer_address": { 00:14:29.128 "trtype": "TCP", 00:14:29.128 "adrfam": "IPv4", 00:14:29.128 "traddr": "10.0.0.1", 00:14:29.128 "trsvcid": "55242" 00:14:29.128 }, 00:14:29.128 "auth": { 00:14:29.128 "state": "completed", 00:14:29.128 "digest": "sha256", 00:14:29.128 "dhgroup": "ffdhe6144" 00:14:29.128 } 00:14:29.128 } 00:14:29.128 ]' 00:14:29.128 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.386 22:23:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:29.644 22:23:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:31.017 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:31.017 22:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.274 22:23:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.840 00:14:31.840 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.840 22:23:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.840 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.099 { 00:14:32.099 "cntlid": 39, 00:14:32.099 "qid": 0, 00:14:32.099 "state": "enabled", 00:14:32.099 "thread": "nvmf_tgt_poll_group_000", 00:14:32.099 "listen_address": { 00:14:32.099 "trtype": "TCP", 00:14:32.099 "adrfam": "IPv4", 00:14:32.099 "traddr": "10.0.0.2", 00:14:32.099 "trsvcid": "4420" 00:14:32.099 }, 00:14:32.099 "peer_address": { 00:14:32.099 "trtype": "TCP", 00:14:32.099 "adrfam": "IPv4", 00:14:32.099 "traddr": "10.0.0.1", 00:14:32.099 "trsvcid": "44900" 00:14:32.099 }, 00:14:32.099 "auth": { 00:14:32.099 "state": "completed", 00:14:32.099 "digest": "sha256", 00:14:32.099 "dhgroup": "ffdhe6144" 00:14:32.099 } 00:14:32.099 } 00:14:32.099 ]' 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:32.099 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.357 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.357 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.357 22:23:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.615 22:23:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.988 22:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.988 22:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.988 22:23:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:35.361 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:35.361 { 00:14:35.361 "cntlid": 41, 00:14:35.361 "qid": 0, 00:14:35.361 "state": "enabled", 00:14:35.361 "thread": 
"nvmf_tgt_poll_group_000", 00:14:35.361 "listen_address": { 00:14:35.361 "trtype": "TCP", 00:14:35.361 "adrfam": "IPv4", 00:14:35.361 "traddr": "10.0.0.2", 00:14:35.361 "trsvcid": "4420" 00:14:35.361 }, 00:14:35.361 "peer_address": { 00:14:35.361 "trtype": "TCP", 00:14:35.361 "adrfam": "IPv4", 00:14:35.361 "traddr": "10.0.0.1", 00:14:35.361 "trsvcid": "44946" 00:14:35.361 }, 00:14:35.361 "auth": { 00:14:35.361 "state": "completed", 00:14:35.361 "digest": "sha256", 00:14:35.361 "dhgroup": "ffdhe8192" 00:14:35.361 } 00:14:35.361 } 00:14:35.361 ]' 00:14:35.361 22:24:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.361 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.361 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.361 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:35.361 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.619 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.619 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.619 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.877 22:24:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.250 22:24:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:38.187 00:14:38.187 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.187 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.187 22:24:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.751 { 00:14:38.751 "cntlid": 43, 00:14:38.751 "qid": 0, 00:14:38.751 "state": "enabled", 00:14:38.751 "thread": "nvmf_tgt_poll_group_000", 00:14:38.751 "listen_address": { 00:14:38.751 "trtype": "TCP", 00:14:38.751 "adrfam": "IPv4", 00:14:38.751 "traddr": "10.0.0.2", 00:14:38.751 "trsvcid": "4420" 00:14:38.751 }, 00:14:38.751 "peer_address": { 00:14:38.751 "trtype": "TCP", 00:14:38.751 "adrfam": "IPv4", 00:14:38.751 "traddr": "10.0.0.1", 00:14:38.751 "trsvcid": "44982" 00:14:38.751 }, 00:14:38.751 "auth": { 00:14:38.751 "state": "completed", 00:14:38.751 "digest": "sha256", 00:14:38.751 "dhgroup": "ffdhe8192" 00:14:38.751 } 00:14:38.751 } 00:14:38.751 ]' 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.751 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.009 22:24:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:40.380 22:24:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.637 22:24:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:40.637 22:24:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.570 00:14:41.570 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:41.570 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:41.570 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.828 { 00:14:41.828 "cntlid": 45, 00:14:41.828 "qid": 0, 00:14:41.828 "state": "enabled", 00:14:41.828 "thread": "nvmf_tgt_poll_group_000", 00:14:41.828 "listen_address": { 00:14:41.828 "trtype": "TCP", 00:14:41.828 "adrfam": "IPv4", 00:14:41.828 "traddr": "10.0.0.2", 00:14:41.828 "trsvcid": "4420" 00:14:41.828 }, 00:14:41.828 "peer_address": { 00:14:41.828 "trtype": "TCP", 00:14:41.828 "adrfam": "IPv4", 00:14:41.828 "traddr": "10.0.0.1", 
00:14:41.828 "trsvcid": "45278" 00:14:41.828 }, 00:14:41.828 "auth": { 00:14:41.828 "state": "completed", 00:14:41.828 "digest": "sha256", 00:14:41.828 "dhgroup": "ffdhe8192" 00:14:41.828 } 00:14:41.828 } 00:14:41.828 ]' 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.828 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.085 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:42.085 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.085 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.085 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.085 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.342 22:24:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.716 22:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:43.716 22:24:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.145 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.145 22:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.145 { 00:14:45.145 "cntlid": 47, 00:14:45.145 "qid": 0, 00:14:45.145 "state": "enabled", 00:14:45.145 "thread": "nvmf_tgt_poll_group_000", 00:14:45.145 "listen_address": { 00:14:45.145 "trtype": "TCP", 00:14:45.145 "adrfam": "IPv4", 00:14:45.145 "traddr": "10.0.0.2", 00:14:45.145 "trsvcid": "4420" 00:14:45.145 }, 00:14:45.145 "peer_address": { 00:14:45.145 "trtype": "TCP", 00:14:45.145 "adrfam": "IPv4", 00:14:45.145 "traddr": "10.0.0.1", 00:14:45.145 "trsvcid": "45314" 00:14:45.145 }, 00:14:45.145 "auth": { 00:14:45.145 "state": "completed", 00:14:45.145 "digest": "sha256", 00:14:45.145 "dhgroup": "ffdhe8192" 00:14:45.145 } 00:14:45.145 } 00:14:45.145 ]' 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.145 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.403 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.403 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.403 22:24:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.661 22:24:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.035 22:24:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.602 00:14:47.602 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.602 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.602 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.859 { 00:14:47.859 "cntlid": 49, 00:14:47.859 "qid": 0, 00:14:47.859 "state": "enabled", 00:14:47.859 "thread": "nvmf_tgt_poll_group_000", 00:14:47.859 "listen_address": { 00:14:47.859 "trtype": "TCP", 00:14:47.859 "adrfam": "IPv4", 00:14:47.859 "traddr": "10.0.0.2", 00:14:47.859 "trsvcid": "4420" 00:14:47.859 }, 00:14:47.859 "peer_address": { 00:14:47.859 "trtype": "TCP", 00:14:47.859 "adrfam": "IPv4", 00:14:47.859 "traddr": "10.0.0.1", 00:14:47.859 "trsvcid": "45346" 00:14:47.859 }, 00:14:47.859 "auth": { 00:14:47.859 "state": "completed", 00:14:47.859 "digest": "sha384", 00:14:47.859 "dhgroup": "null" 00:14:47.859 } 00:14:47.859 } 00:14:47.859 ]' 00:14:47.859 
22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.859 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.117 22:24:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:14:49.491 22:24:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:49.491 
22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.491 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.750 22:24:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.750 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.008 00:14:50.008 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.008 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.008 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.266 { 00:14:50.266 "cntlid": 51, 00:14:50.266 "qid": 0, 00:14:50.266 "state": "enabled", 00:14:50.266 "thread": "nvmf_tgt_poll_group_000", 00:14:50.266 "listen_address": { 00:14:50.266 "trtype": "TCP", 00:14:50.266 "adrfam": "IPv4", 00:14:50.266 "traddr": "10.0.0.2", 00:14:50.266 "trsvcid": "4420" 00:14:50.266 }, 00:14:50.266 "peer_address": { 00:14:50.266 "trtype": "TCP", 00:14:50.266 "adrfam": "IPv4", 00:14:50.266 "traddr": "10.0.0.1", 00:14:50.266 "trsvcid": "59010" 00:14:50.266 }, 00:14:50.266 "auth": { 00:14:50.266 "state": "completed", 00:14:50.266 "digest": "sha384", 00:14:50.266 "dhgroup": "null" 00:14:50.266 } 00:14:50.266 } 00:14:50.266 ]' 00:14:50.266 22:24:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.524 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.782 22:24:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:52.155 22:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.155 22:24:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:52.720 00:14:52.720 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.720 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.720 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.979 { 00:14:52.979 "cntlid": 53, 00:14:52.979 "qid": 0, 00:14:52.979 "state": "enabled", 00:14:52.979 "thread": "nvmf_tgt_poll_group_000", 00:14:52.979 "listen_address": { 00:14:52.979 "trtype": "TCP", 00:14:52.979 "adrfam": "IPv4", 00:14:52.979 "traddr": "10.0.0.2", 00:14:52.979 "trsvcid": "4420" 00:14:52.979 }, 00:14:52.979 "peer_address": { 00:14:52.979 "trtype": "TCP", 00:14:52.979 "adrfam": "IPv4", 00:14:52.979 "traddr": "10.0.0.1", 00:14:52.979 "trsvcid": "59050" 00:14:52.979 }, 00:14:52.979 "auth": { 00:14:52.979 "state": "completed", 00:14:52.979 "digest": "sha384", 00:14:52.979 "dhgroup": "null" 00:14:52.979 } 00:14:52.979 } 00:14:52.979 ]' 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:52.979 22:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.979 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.236 22:24:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.608 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:54.866 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:55.124 00:14:55.124 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.124 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.125 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.382 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.382 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.382 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.382 22:24:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.383 { 00:14:55.383 "cntlid": 55, 00:14:55.383 "qid": 0, 00:14:55.383 "state": "enabled", 00:14:55.383 "thread": "nvmf_tgt_poll_group_000", 00:14:55.383 "listen_address": { 00:14:55.383 "trtype": "TCP", 00:14:55.383 "adrfam": "IPv4", 00:14:55.383 "traddr": "10.0.0.2", 00:14:55.383 "trsvcid": "4420" 00:14:55.383 }, 00:14:55.383 "peer_address": { 00:14:55.383 "trtype": "TCP", 00:14:55.383 "adrfam": "IPv4", 00:14:55.383 "traddr": "10.0.0.1", 00:14:55.383 "trsvcid": "59086" 00:14:55.383 }, 00:14:55.383 "auth": { 
00:14:55.383 "state": "completed", 00:14:55.383 "digest": "sha384", 00:14:55.383 "dhgroup": "null" 00:14:55.383 } 00:14:55.383 } 00:14:55.383 ]' 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:55.383 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.640 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.640 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.640 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:55.898 22:24:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.272 22:24:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:57.838 00:14:57.838 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.838 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.838 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.095 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.095 { 00:14:58.095 "cntlid": 57, 00:14:58.095 "qid": 0, 00:14:58.095 "state": "enabled", 00:14:58.095 "thread": "nvmf_tgt_poll_group_000", 00:14:58.095 "listen_address": { 00:14:58.095 "trtype": "TCP", 00:14:58.095 "adrfam": "IPv4", 00:14:58.096 "traddr": "10.0.0.2", 00:14:58.096 "trsvcid": "4420" 00:14:58.096 }, 00:14:58.096 "peer_address": { 00:14:58.096 "trtype": "TCP", 00:14:58.096 "adrfam": "IPv4", 00:14:58.096 "traddr": "10.0.0.1", 00:14:58.096 "trsvcid": "59110" 00:14:58.096 }, 00:14:58.096 "auth": { 00:14:58.096 "state": "completed", 00:14:58.096 "digest": "sha384", 00:14:58.096 "dhgroup": "ffdhe2048" 00:14:58.096 } 00:14:58.096 } 00:14:58.096 ]' 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.096 22:24:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.353 22:24:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.725 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:59.984 22:24:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.984 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:00.242 00:15:00.242 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:00.242 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:00.242 22:24:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.500 { 00:15:00.500 "cntlid": 59, 00:15:00.500 "qid": 0, 00:15:00.500 "state": "enabled", 00:15:00.500 "thread": "nvmf_tgt_poll_group_000", 00:15:00.500 "listen_address": { 00:15:00.500 "trtype": "TCP", 00:15:00.500 "adrfam": "IPv4", 00:15:00.500 "traddr": "10.0.0.2", 00:15:00.500 "trsvcid": "4420" 00:15:00.500 }, 00:15:00.500 "peer_address": { 00:15:00.500 "trtype": "TCP", 00:15:00.500 "adrfam": "IPv4", 00:15:00.500 "traddr": "10.0.0.1", 00:15:00.500 "trsvcid": "48386" 00:15:00.500 }, 00:15:00.500 "auth": { 00:15:00.500 "state": "completed", 00:15:00.500 "digest": "sha384", 00:15:00.500 "dhgroup": "ffdhe2048" 00:15:00.500 } 00:15:00.500 } 00:15:00.500 ]' 00:15:00.500 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.757 
22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.757 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.015 22:24:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.388 22:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.388 22:24:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.388 22:24:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.388 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.953 00:15:02.953 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.953 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.953 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.210 { 
00:15:03.210 "cntlid": 61, 00:15:03.210 "qid": 0, 00:15:03.210 "state": "enabled", 00:15:03.210 "thread": "nvmf_tgt_poll_group_000", 00:15:03.210 "listen_address": { 00:15:03.210 "trtype": "TCP", 00:15:03.210 "adrfam": "IPv4", 00:15:03.210 "traddr": "10.0.0.2", 00:15:03.210 "trsvcid": "4420" 00:15:03.210 }, 00:15:03.210 "peer_address": { 00:15:03.210 "trtype": "TCP", 00:15:03.210 "adrfam": "IPv4", 00:15:03.210 "traddr": "10.0.0.1", 00:15:03.210 "trsvcid": "48404" 00:15:03.210 }, 00:15:03.210 "auth": { 00:15:03.210 "state": "completed", 00:15:03.210 "digest": "sha384", 00:15:03.210 "dhgroup": "ffdhe2048" 00:15:03.210 } 00:15:03.210 } 00:15:03.210 ]' 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.210 22:24:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.776 22:24:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.710 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.276 22:24:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.534 00:15:05.534 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.534 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.534 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.792 22:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.792 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.792 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.792 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.792 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.792 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.792 { 00:15:05.792 "cntlid": 63, 00:15:05.792 "qid": 0, 00:15:05.792 "state": "enabled", 00:15:05.792 "thread": "nvmf_tgt_poll_group_000", 00:15:05.792 "listen_address": { 00:15:05.792 "trtype": "TCP", 00:15:05.792 "adrfam": "IPv4", 00:15:05.792 "traddr": "10.0.0.2", 00:15:05.792 "trsvcid": "4420" 00:15:05.792 }, 00:15:05.792 "peer_address": { 00:15:05.792 "trtype": "TCP", 00:15:05.792 "adrfam": "IPv4", 00:15:05.792 "traddr": "10.0.0.1", 00:15:05.792 "trsvcid": "48422" 00:15:05.792 }, 00:15:05.792 "auth": { 00:15:05.792 "state": "completed", 00:15:05.792 "digest": "sha384", 00:15:05.792 "dhgroup": "ffdhe2048" 00:15:05.792 } 00:15:05.792 } 00:15:05.793 ]' 00:15:05.793 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.793 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:05.793 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.793 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:05.793 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.050 22:24:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.050 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.051 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.309 22:24:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.683 22:24:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.683 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.683 22:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.246 00:15:08.246 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.246 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.247 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.504 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.504 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.504 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.504 22:24:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:08.504 { 00:15:08.504 "cntlid": 65, 00:15:08.504 "qid": 0, 00:15:08.504 "state": "enabled", 00:15:08.504 "thread": "nvmf_tgt_poll_group_000", 00:15:08.504 "listen_address": { 00:15:08.504 "trtype": "TCP", 00:15:08.504 "adrfam": "IPv4", 00:15:08.504 "traddr": "10.0.0.2", 00:15:08.504 "trsvcid": "4420" 00:15:08.504 }, 00:15:08.504 "peer_address": { 00:15:08.504 "trtype": "TCP", 00:15:08.504 "adrfam": "IPv4", 00:15:08.504 "traddr": "10.0.0.1", 
00:15:08.504 "trsvcid": "48444" 00:15:08.504 }, 00:15:08.504 "auth": { 00:15:08.504 "state": "completed", 00:15:08.504 "digest": "sha384", 00:15:08.504 "dhgroup": "ffdhe3072" 00:15:08.504 } 00:15:08.504 } 00:15:08.504 ]' 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.504 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.761 22:24:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.190 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.447 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.448 22:24:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.705 00:15:10.705 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.705 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.705 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.963 { 00:15:10.963 "cntlid": 67, 00:15:10.963 "qid": 0, 00:15:10.963 "state": "enabled", 00:15:10.963 "thread": "nvmf_tgt_poll_group_000", 00:15:10.963 "listen_address": { 00:15:10.963 "trtype": "TCP", 00:15:10.963 "adrfam": "IPv4", 00:15:10.963 "traddr": "10.0.0.2", 00:15:10.963 "trsvcid": "4420" 00:15:10.963 }, 00:15:10.963 "peer_address": { 00:15:10.963 "trtype": "TCP", 00:15:10.963 "adrfam": "IPv4", 00:15:10.963 "traddr": "10.0.0.1", 00:15:10.963 "trsvcid": "43644" 00:15:10.963 }, 00:15:10.963 "auth": { 00:15:10.963 "state": "completed", 00:15:10.963 "digest": "sha384", 00:15:10.963 "dhgroup": "ffdhe3072" 00:15:10.963 } 00:15:10.963 } 00:15:10.963 ]' 00:15:10.963 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.221 22:24:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.478 22:24:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:12.858 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.859 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 ffdhe3072 2 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.117 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.375 00:15:13.375 22:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.375 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.375 22:24:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.633 { 00:15:13.633 "cntlid": 69, 00:15:13.633 "qid": 0, 00:15:13.633 "state": "enabled", 00:15:13.633 "thread": "nvmf_tgt_poll_group_000", 00:15:13.633 "listen_address": { 00:15:13.633 "trtype": "TCP", 00:15:13.633 "adrfam": "IPv4", 00:15:13.633 "traddr": "10.0.0.2", 00:15:13.633 "trsvcid": "4420" 00:15:13.633 }, 00:15:13.633 "peer_address": { 00:15:13.633 "trtype": "TCP", 00:15:13.633 "adrfam": "IPv4", 00:15:13.633 "traddr": "10.0.0.1", 00:15:13.633 "trsvcid": "43674" 00:15:13.633 }, 00:15:13.633 "auth": { 00:15:13.633 "state": "completed", 00:15:13.633 "digest": "sha384", 00:15:13.633 "dhgroup": "ffdhe3072" 00:15:13.633 } 00:15:13.633 } 00:15:13.633 ]' 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.633 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.890 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.890 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.890 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.890 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.890 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.148 22:24:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.522 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.522 22:24:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:15.522 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.088 00:15:16.088 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:16.088 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:16.088 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:16.346 { 00:15:16.346 "cntlid": 71, 00:15:16.346 "qid": 0, 00:15:16.346 "state": "enabled", 00:15:16.346 "thread": "nvmf_tgt_poll_group_000", 
00:15:16.346 "listen_address": { 00:15:16.346 "trtype": "TCP", 00:15:16.346 "adrfam": "IPv4", 00:15:16.346 "traddr": "10.0.0.2", 00:15:16.346 "trsvcid": "4420" 00:15:16.346 }, 00:15:16.346 "peer_address": { 00:15:16.346 "trtype": "TCP", 00:15:16.346 "adrfam": "IPv4", 00:15:16.346 "traddr": "10.0.0.1", 00:15:16.346 "trsvcid": "43692" 00:15:16.346 }, 00:15:16.346 "auth": { 00:15:16.346 "state": "completed", 00:15:16.346 "digest": "sha384", 00:15:16.346 "dhgroup": "ffdhe3072" 00:15:16.346 } 00:15:16.346 } 00:15:16.346 ]' 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.346 22:24:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.346 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:16.346 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.346 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.346 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.346 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.912 22:24:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 
00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:17.848 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.104 22:24:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:18.668 00:15:18.668 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.668 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:18.668 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.925 22:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:18.925 { 00:15:18.925 "cntlid": 73, 00:15:18.925 "qid": 0, 00:15:18.925 "state": "enabled", 00:15:18.925 "thread": "nvmf_tgt_poll_group_000", 00:15:18.925 "listen_address": { 00:15:18.925 "trtype": "TCP", 00:15:18.925 "adrfam": "IPv4", 00:15:18.925 "traddr": "10.0.0.2", 00:15:18.925 "trsvcid": "4420" 00:15:18.925 }, 00:15:18.925 "peer_address": { 00:15:18.925 "trtype": "TCP", 00:15:18.925 "adrfam": "IPv4", 00:15:18.925 "traddr": "10.0.0.1", 00:15:18.925 "trsvcid": "43736" 00:15:18.925 }, 00:15:18.925 "auth": { 00:15:18.925 "state": "completed", 00:15:18.925 "digest": "sha384", 00:15:18.925 "dhgroup": "ffdhe4096" 00:15:18.925 } 00:15:18.925 } 00:15:18.925 ]' 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:18.925 22:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.925 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.183 22:24:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:15:20.555 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe4096 00:15:20.556 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:20.814 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.380 00:15:21.380 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.380 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.380 22:24:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:21.638 { 00:15:21.638 "cntlid": 75, 00:15:21.638 "qid": 0, 00:15:21.638 "state": "enabled", 00:15:21.638 "thread": "nvmf_tgt_poll_group_000", 00:15:21.638 "listen_address": { 00:15:21.638 "trtype": "TCP", 00:15:21.638 "adrfam": "IPv4", 00:15:21.638 "traddr": "10.0.0.2", 00:15:21.638 "trsvcid": "4420" 00:15:21.638 }, 00:15:21.638 "peer_address": { 00:15:21.638 "trtype": "TCP", 00:15:21.638 "adrfam": "IPv4", 00:15:21.638 "traddr": "10.0.0.1", 00:15:21.638 "trsvcid": "60290" 00:15:21.638 
}, 00:15:21.638 "auth": { 00:15:21.638 "state": "completed", 00:15:21.638 "digest": "sha384", 00:15:21.638 "dhgroup": "ffdhe4096" 00:15:21.638 } 00:15:21.638 } 00:15:21.638 ]' 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.638 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.896 22:24:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.270 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.528 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.529 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.529 22:24:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.787 00:15:23.787 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.787 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.787 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.046 { 00:15:24.046 "cntlid": 77, 00:15:24.046 "qid": 0, 00:15:24.046 "state": "enabled", 00:15:24.046 "thread": "nvmf_tgt_poll_group_000", 00:15:24.046 "listen_address": { 00:15:24.046 "trtype": "TCP", 00:15:24.046 "adrfam": "IPv4", 00:15:24.046 "traddr": "10.0.0.2", 00:15:24.046 "trsvcid": "4420" 00:15:24.046 }, 00:15:24.046 "peer_address": { 00:15:24.046 "trtype": "TCP", 00:15:24.046 "adrfam": "IPv4", 00:15:24.046 "traddr": "10.0.0.1", 00:15:24.046 "trsvcid": "60334" 00:15:24.046 }, 00:15:24.046 "auth": { 00:15:24.046 "state": "completed", 00:15:24.046 "digest": "sha384", 00:15:24.046 "dhgroup": "ffdhe4096" 00:15:24.046 } 00:15:24.046 } 00:15:24.046 ]' 00:15:24.046 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.304 22:24:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:24.562 22:24:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:15:25.933 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:25.934 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:26.190 22:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.190 22:24:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.455 00:15:26.455 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.455 22:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.455 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.713 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.713 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.713 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.713 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.970 { 00:15:26.970 "cntlid": 79, 00:15:26.970 "qid": 0, 00:15:26.970 "state": "enabled", 00:15:26.970 "thread": "nvmf_tgt_poll_group_000", 00:15:26.970 "listen_address": { 00:15:26.970 "trtype": "TCP", 00:15:26.970 "adrfam": "IPv4", 00:15:26.970 "traddr": "10.0.0.2", 00:15:26.970 "trsvcid": "4420" 00:15:26.970 }, 00:15:26.970 "peer_address": { 00:15:26.970 "trtype": "TCP", 00:15:26.970 "adrfam": "IPv4", 00:15:26.970 "traddr": "10.0.0.1", 00:15:26.970 "trsvcid": "60364" 00:15:26.970 }, 00:15:26.970 "auth": { 00:15:26.970 "state": "completed", 00:15:26.970 "digest": "sha384", 00:15:26.970 "dhgroup": "ffdhe4096" 00:15:26.970 } 00:15:26.970 } 00:15:26.970 ]' 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.970 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.971 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.971 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.228 22:24:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.599 22:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.599 22:24:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.599 22:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.599 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.163 00:15:29.163 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.163 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.163 22:24:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.727 { 00:15:29.727 "cntlid": 81, 00:15:29.727 "qid": 0, 00:15:29.727 "state": "enabled", 00:15:29.727 "thread": 
"nvmf_tgt_poll_group_000", 00:15:29.727 "listen_address": { 00:15:29.727 "trtype": "TCP", 00:15:29.727 "adrfam": "IPv4", 00:15:29.727 "traddr": "10.0.0.2", 00:15:29.727 "trsvcid": "4420" 00:15:29.727 }, 00:15:29.727 "peer_address": { 00:15:29.727 "trtype": "TCP", 00:15:29.727 "adrfam": "IPv4", 00:15:29.727 "traddr": "10.0.0.1", 00:15:29.727 "trsvcid": "60402" 00:15:29.727 }, 00:15:29.727 "auth": { 00:15:29.727 "state": "completed", 00:15:29.727 "digest": "sha384", 00:15:29.727 "dhgroup": "ffdhe6144" 00:15:29.727 } 00:15:29.727 } 00:15:29.727 ]' 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.727 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.986 22:24:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.351 22:24:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:31.609 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:32.172 00:15:32.172 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.172 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.172 22:24:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.430 { 00:15:32.430 "cntlid": 83, 00:15:32.430 "qid": 0, 00:15:32.430 "state": "enabled", 00:15:32.430 "thread": "nvmf_tgt_poll_group_000", 00:15:32.430 "listen_address": { 00:15:32.430 "trtype": "TCP", 00:15:32.430 "adrfam": "IPv4", 00:15:32.430 "traddr": "10.0.0.2", 00:15:32.430 "trsvcid": "4420" 00:15:32.430 }, 00:15:32.430 "peer_address": { 00:15:32.430 "trtype": "TCP", 00:15:32.430 "adrfam": "IPv4", 00:15:32.430 "traddr": "10.0.0.1", 00:15:32.430 "trsvcid": "60030" 00:15:32.430 }, 00:15:32.430 "auth": { 00:15:32.430 "state": "completed", 00:15:32.430 "digest": "sha384", 00:15:32.430 "dhgroup": "ffdhe6144" 00:15:32.430 } 00:15:32.430 } 00:15:32.430 ]' 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.430 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.687 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.687 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.687 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.687 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.687 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.945 22:24:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.316 22:24:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:34.316 22:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:35.256 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.256 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.256 { 00:15:35.256 "cntlid": 85, 00:15:35.256 "qid": 0, 00:15:35.256 "state": "enabled", 00:15:35.256 "thread": "nvmf_tgt_poll_group_000", 00:15:35.256 "listen_address": { 00:15:35.256 "trtype": "TCP", 00:15:35.256 "adrfam": "IPv4", 00:15:35.256 "traddr": "10.0.0.2", 00:15:35.256 "trsvcid": "4420" 00:15:35.256 }, 00:15:35.256 "peer_address": { 00:15:35.256 "trtype": "TCP", 00:15:35.257 "adrfam": "IPv4", 00:15:35.257 "traddr": "10.0.0.1", 
00:15:35.257 "trsvcid": "60066" 00:15:35.257 }, 00:15:35.257 "auth": { 00:15:35.257 "state": "completed", 00:15:35.257 "digest": "sha384", 00:15:35.257 "dhgroup": "ffdhe6144" 00:15:35.257 } 00:15:35.257 } 00:15:35.257 ]' 00:15:35.575 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.576 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.576 22:25:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.576 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:35.576 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.576 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.576 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.576 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.858 22:25:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.231 22:25:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.231 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:37.232 22:25:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.170 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:38.170 22:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.170 { 00:15:38.170 "cntlid": 87, 00:15:38.170 "qid": 0, 00:15:38.170 "state": "enabled", 00:15:38.170 "thread": "nvmf_tgt_poll_group_000", 00:15:38.170 "listen_address": { 00:15:38.170 "trtype": "TCP", 00:15:38.170 "adrfam": "IPv4", 00:15:38.170 "traddr": "10.0.0.2", 00:15:38.170 "trsvcid": "4420" 00:15:38.170 }, 00:15:38.170 "peer_address": { 00:15:38.170 "trtype": "TCP", 00:15:38.170 "adrfam": "IPv4", 00:15:38.170 "traddr": "10.0.0.1", 00:15:38.170 "trsvcid": "60090" 00:15:38.170 }, 00:15:38.170 "auth": { 00:15:38.170 "state": "completed", 00:15:38.170 "digest": "sha384", 00:15:38.170 "dhgroup": "ffdhe6144" 00:15:38.170 } 00:15:38.170 } 00:15:38.170 ]' 00:15:38.170 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.428 22:25:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.686 22:25:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.062 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.321 22:25:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.321 22:25:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:41.258 00:15:41.258 22:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.258 22:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.258 22:25:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.517 { 00:15:41.517 "cntlid": 89, 00:15:41.517 "qid": 0, 00:15:41.517 "state": "enabled", 00:15:41.517 "thread": "nvmf_tgt_poll_group_000", 00:15:41.517 "listen_address": { 00:15:41.517 "trtype": "TCP", 00:15:41.517 "adrfam": "IPv4", 00:15:41.517 "traddr": "10.0.0.2", 00:15:41.517 "trsvcid": "4420" 00:15:41.517 }, 00:15:41.517 "peer_address": { 00:15:41.517 "trtype": "TCP", 00:15:41.517 "adrfam": "IPv4", 00:15:41.517 "traddr": "10.0.0.1", 00:15:41.517 "trsvcid": "59320" 00:15:41.517 }, 00:15:41.517 "auth": { 00:15:41.517 "state": "completed", 00:15:41.517 "digest": "sha384", 00:15:41.517 "dhgroup": "ffdhe8192" 00:15:41.517 } 00:15:41.517 } 00:15:41.517 ]' 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.517 
22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.517 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:41.775 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.775 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.775 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.775 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.033 22:25:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.412 22:25:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:43.412 22:25:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.795 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:44.795 { 00:15:44.795 "cntlid": 91, 00:15:44.795 "qid": 0, 00:15:44.795 "state": "enabled", 00:15:44.795 "thread": "nvmf_tgt_poll_group_000", 00:15:44.795 "listen_address": { 00:15:44.795 "trtype": "TCP", 00:15:44.795 "adrfam": "IPv4", 00:15:44.795 "traddr": "10.0.0.2", 00:15:44.795 "trsvcid": "4420" 00:15:44.795 }, 00:15:44.795 "peer_address": { 00:15:44.795 "trtype": "TCP", 00:15:44.795 "adrfam": "IPv4", 00:15:44.795 "traddr": "10.0.0.1", 00:15:44.795 "trsvcid": "59338" 00:15:44.795 }, 00:15:44.795 "auth": { 00:15:44.795 "state": "completed", 00:15:44.795 "digest": "sha384", 00:15:44.795 "dhgroup": "ffdhe8192" 00:15:44.795 } 00:15:44.795 } 00:15:44.795 ]' 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.795 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.053 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.053 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.053 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.053 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.053 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.311 22:25:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:46.686 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.686 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.686 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:46.686 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.686 22:25:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.686 22:25:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:48.065 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:48.065 { 00:15:48.065 "cntlid": 93, 00:15:48.065 "qid": 0, 00:15:48.065 "state": "enabled", 00:15:48.065 "thread": "nvmf_tgt_poll_group_000", 00:15:48.065 "listen_address": { 00:15:48.065 "trtype": "TCP", 00:15:48.065 "adrfam": "IPv4", 00:15:48.065 "traddr": "10.0.0.2", 00:15:48.065 "trsvcid": "4420" 00:15:48.065 }, 00:15:48.065 "peer_address": { 00:15:48.065 "trtype": "TCP", 00:15:48.065 "adrfam": "IPv4", 00:15:48.065 "traddr": "10.0.0.1", 00:15:48.065 "trsvcid": "59374" 00:15:48.065 }, 00:15:48.065 "auth": { 00:15:48.065 "state": "completed", 00:15:48.065 "digest": "sha384", 00:15:48.065 "dhgroup": "ffdhe8192" 00:15:48.065 } 00:15:48.065 } 00:15:48.065 ]' 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:15:48.065 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.322 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.322 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.322 22:25:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.580 22:25:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.959 22:25:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.959 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:49.960 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:49.960 22:25:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.900 00:15:50.900 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.900 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.900 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.467 { 00:15:51.467 "cntlid": 95, 00:15:51.467 "qid": 0, 00:15:51.467 "state": "enabled", 00:15:51.467 "thread": "nvmf_tgt_poll_group_000", 00:15:51.467 "listen_address": { 00:15:51.467 "trtype": "TCP", 00:15:51.467 "adrfam": "IPv4", 00:15:51.467 "traddr": "10.0.0.2", 00:15:51.467 "trsvcid": "4420" 00:15:51.467 }, 00:15:51.467 "peer_address": { 00:15:51.467 "trtype": "TCP", 00:15:51.467 "adrfam": "IPv4", 00:15:51.467 "traddr": "10.0.0.1", 
00:15:51.467 "trsvcid": "50122" 00:15:51.467 }, 00:15:51.467 "auth": { 00:15:51.467 "state": "completed", 00:15:51.467 "digest": "sha384", 00:15:51.467 "dhgroup": "ffdhe8192" 00:15:51.467 } 00:15:51.467 } 00:15:51.467 ]' 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:51.467 22:25:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.467 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.467 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.467 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.726 22:25:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.102 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.360 22:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.360 22:25:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.618 00:15:53.618 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.618 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.618 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.875 { 00:15:53.875 "cntlid": 97, 00:15:53.875 "qid": 0, 00:15:53.875 "state": "enabled", 00:15:53.875 "thread": "nvmf_tgt_poll_group_000", 00:15:53.875 "listen_address": { 00:15:53.875 "trtype": "TCP", 00:15:53.875 "adrfam": "IPv4", 00:15:53.875 "traddr": "10.0.0.2", 00:15:53.875 "trsvcid": "4420" 00:15:53.875 }, 00:15:53.875 "peer_address": { 00:15:53.875 "trtype": "TCP", 00:15:53.875 "adrfam": "IPv4", 00:15:53.875 "traddr": "10.0.0.1", 00:15:53.875 "trsvcid": "50134" 00:15:53.875 }, 00:15:53.875 "auth": { 00:15:53.875 "state": "completed", 00:15:53.875 "digest": "sha512", 00:15:53.875 "dhgroup": "null" 00:15:53.875 } 00:15:53.875 } 00:15:53.875 ]' 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.875 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.133 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.133 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:54.133 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.391 22:25:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.778 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.779 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.779 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.779 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.779 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:56.038 00:15:56.038 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:56.038 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:56.038 22:25:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:56.603 { 00:15:56.603 "cntlid": 99, 00:15:56.603 "qid": 0, 00:15:56.603 "state": "enabled", 00:15:56.603 "thread": "nvmf_tgt_poll_group_000", 00:15:56.603 "listen_address": { 00:15:56.603 "trtype": "TCP", 00:15:56.603 "adrfam": "IPv4", 00:15:56.603 "traddr": "10.0.0.2", 00:15:56.603 "trsvcid": "4420" 00:15:56.603 }, 00:15:56.603 "peer_address": { 00:15:56.603 "trtype": "TCP", 00:15:56.603 "adrfam": "IPv4", 00:15:56.603 "traddr": "10.0.0.1", 00:15:56.603 "trsvcid": "50166" 00:15:56.603 }, 00:15:56.603 "auth": { 00:15:56.603 "state": "completed", 00:15:56.603 "digest": "sha512", 00:15:56.603 "dhgroup": "null" 00:15:56.603 } 00:15:56.603 } 00:15:56.603 ]' 00:15:56.603 
22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:56.603 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:56.604 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:56.604 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.861 22:25:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:15:58.238 22:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:58.238 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.496 22:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.496 22:25:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.755 00:15:58.755 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.755 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.755 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.013 { 00:15:59.013 "cntlid": 101, 00:15:59.013 "qid": 0, 00:15:59.013 "state": "enabled", 00:15:59.013 "thread": "nvmf_tgt_poll_group_000", 00:15:59.013 "listen_address": { 00:15:59.013 "trtype": "TCP", 00:15:59.013 "adrfam": "IPv4", 00:15:59.013 "traddr": "10.0.0.2", 00:15:59.013 "trsvcid": "4420" 00:15:59.013 }, 00:15:59.013 "peer_address": { 00:15:59.013 "trtype": "TCP", 00:15:59.013 "adrfam": "IPv4", 00:15:59.013 "traddr": "10.0.0.1", 00:15:59.013 "trsvcid": "50188" 00:15:59.013 }, 00:15:59.013 "auth": { 00:15:59.013 "state": "completed", 00:15:59.013 "digest": "sha512", 00:15:59.013 "dhgroup": "null" 00:15:59.013 } 00:15:59.013 } 00:15:59.013 ]' 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.013 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.273 22:25:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.652 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:00.910 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:00.910 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.910 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.910 22:25:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:00.910 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:00.911 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.175 00:16:01.175 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.175 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.176 22:25:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:16:01.495 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.496 { 00:16:01.496 "cntlid": 103, 00:16:01.496 "qid": 0, 00:16:01.496 "state": "enabled", 00:16:01.496 "thread": "nvmf_tgt_poll_group_000", 00:16:01.496 "listen_address": { 00:16:01.496 "trtype": "TCP", 00:16:01.496 "adrfam": "IPv4", 00:16:01.496 "traddr": "10.0.0.2", 00:16:01.496 "trsvcid": "4420" 00:16:01.496 }, 00:16:01.496 "peer_address": { 00:16:01.496 "trtype": "TCP", 00:16:01.496 "adrfam": "IPv4", 00:16:01.496 "traddr": "10.0.0.1", 00:16:01.496 "trsvcid": "37464" 00:16:01.496 }, 00:16:01.496 "auth": { 00:16:01.496 "state": "completed", 00:16:01.496 "digest": "sha512", 00:16:01.496 "dhgroup": "null" 00:16:01.496 } 00:16:01.496 } 00:16:01.496 ]' 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.496 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.755 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:01.755 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:16:01.755 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.755 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.755 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.015 22:25:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.390 22:25:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.390 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.390 22:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.961 00:16:03.961 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.961 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.961 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.221 { 00:16:04.221 "cntlid": 105, 00:16:04.221 "qid": 0, 00:16:04.221 "state": "enabled", 00:16:04.221 "thread": "nvmf_tgt_poll_group_000", 00:16:04.221 "listen_address": { 00:16:04.221 "trtype": "TCP", 00:16:04.221 "adrfam": "IPv4", 00:16:04.221 "traddr": "10.0.0.2", 00:16:04.221 "trsvcid": "4420" 00:16:04.221 }, 00:16:04.221 "peer_address": { 00:16:04.221 "trtype": "TCP", 00:16:04.221 "adrfam": "IPv4", 00:16:04.221 "traddr": "10.0.0.1", 
00:16:04.221 "trsvcid": "37474" 00:16:04.221 }, 00:16:04.221 "auth": { 00:16:04.221 "state": "completed", 00:16:04.221 "digest": "sha512", 00:16:04.221 "dhgroup": "ffdhe2048" 00:16:04.221 } 00:16:04.221 } 00:16:04.221 ]' 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.221 22:25:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.480 22:25:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.860 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.119 22:25:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:06.377 00:16:06.377 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.377 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.378 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.636 { 00:16:06.636 "cntlid": 107, 00:16:06.636 "qid": 0, 00:16:06.636 "state": "enabled", 00:16:06.636 "thread": "nvmf_tgt_poll_group_000", 00:16:06.636 "listen_address": { 00:16:06.636 "trtype": "TCP", 00:16:06.636 "adrfam": "IPv4", 00:16:06.636 "traddr": "10.0.0.2", 00:16:06.636 "trsvcid": "4420" 00:16:06.636 }, 00:16:06.636 "peer_address": { 00:16:06.636 "trtype": "TCP", 00:16:06.636 "adrfam": "IPv4", 00:16:06.636 "traddr": "10.0.0.1", 00:16:06.636 "trsvcid": "37508" 00:16:06.636 }, 00:16:06.636 "auth": { 00:16:06.636 "state": "completed", 00:16:06.636 "digest": "sha512", 00:16:06.636 "dhgroup": "ffdhe2048" 00:16:06.636 } 00:16:06.636 } 00:16:06.636 ]' 00:16:06.636 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.894 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.152 22:25:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:16:08.529 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.529 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.530 22:25:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.530 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.788 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.788 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:08.788 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:09.046 00:16:09.046 22:25:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.046 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.046 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.304 { 00:16:09.304 "cntlid": 109, 00:16:09.304 "qid": 0, 00:16:09.304 "state": "enabled", 00:16:09.304 "thread": "nvmf_tgt_poll_group_000", 00:16:09.304 "listen_address": { 00:16:09.304 "trtype": "TCP", 00:16:09.304 "adrfam": "IPv4", 00:16:09.304 "traddr": "10.0.0.2", 00:16:09.304 "trsvcid": "4420" 00:16:09.304 }, 00:16:09.304 "peer_address": { 00:16:09.304 "trtype": "TCP", 00:16:09.304 "adrfam": "IPv4", 00:16:09.304 "traddr": "10.0.0.1", 00:16:09.304 "trsvcid": "37544" 00:16:09.304 }, 00:16:09.304 "auth": { 00:16:09.304 "state": "completed", 00:16:09.304 "digest": "sha512", 00:16:09.304 "dhgroup": "ffdhe2048" 00:16:09.304 } 00:16:09.304 } 00:16:09.304 ]' 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.304 22:25:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.564 22:25:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:10.940 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.200 22:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:11.458 00:16:11.458 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.458 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.458 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.716 { 00:16:11.716 "cntlid": 111, 00:16:11.716 "qid": 0, 00:16:11.716 "state": "enabled", 00:16:11.716 "thread": "nvmf_tgt_poll_group_000", 
00:16:11.716 "listen_address": { 00:16:11.716 "trtype": "TCP", 00:16:11.716 "adrfam": "IPv4", 00:16:11.716 "traddr": "10.0.0.2", 00:16:11.716 "trsvcid": "4420" 00:16:11.716 }, 00:16:11.716 "peer_address": { 00:16:11.716 "trtype": "TCP", 00:16:11.716 "adrfam": "IPv4", 00:16:11.716 "traddr": "10.0.0.1", 00:16:11.716 "trsvcid": "33414" 00:16:11.716 }, 00:16:11.716 "auth": { 00:16:11.716 "state": "completed", 00:16:11.716 "digest": "sha512", 00:16:11.716 "dhgroup": "ffdhe2048" 00:16:11.716 } 00:16:11.716 } 00:16:11.716 ]' 00:16:11.716 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.974 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.232 22:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 
00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.612 22:25:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:13.612 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:14.179 00:16:14.179 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.179 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.179 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.436 22:25:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.436 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.436 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.436 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.436 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.436 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.436 { 00:16:14.436 "cntlid": 113, 00:16:14.436 "qid": 0, 00:16:14.436 "state": "enabled", 00:16:14.436 "thread": "nvmf_tgt_poll_group_000", 00:16:14.436 "listen_address": { 00:16:14.437 "trtype": "TCP", 00:16:14.437 "adrfam": "IPv4", 00:16:14.437 "traddr": "10.0.0.2", 00:16:14.437 "trsvcid": "4420" 00:16:14.437 }, 00:16:14.437 "peer_address": { 00:16:14.437 "trtype": "TCP", 00:16:14.437 "adrfam": "IPv4", 00:16:14.437 "traddr": "10.0.0.1", 00:16:14.437 "trsvcid": "33450" 00:16:14.437 }, 00:16:14.437 "auth": { 00:16:14.437 "state": "completed", 00:16:14.437 "digest": "sha512", 00:16:14.437 "dhgroup": "ffdhe3072" 00:16:14.437 } 00:16:14.437 } 00:16:14.437 ]' 00:16:14.437 22:25:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.437 22:25:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.437 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.695 22:25:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:16:16.072 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.331 22:25:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:16.591 00:16:16.849 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.849 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.849 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.108 { 00:16:17.108 "cntlid": 115, 00:16:17.108 "qid": 0, 00:16:17.108 "state": "enabled", 00:16:17.108 "thread": "nvmf_tgt_poll_group_000", 00:16:17.108 "listen_address": { 00:16:17.108 "trtype": "TCP", 00:16:17.108 "adrfam": "IPv4", 00:16:17.108 "traddr": "10.0.0.2", 00:16:17.108 "trsvcid": "4420" 00:16:17.108 }, 00:16:17.108 "peer_address": { 00:16:17.108 "trtype": "TCP", 00:16:17.108 "adrfam": "IPv4", 00:16:17.108 "traddr": "10.0.0.1", 00:16:17.108 "trsvcid": "33482" 00:16:17.108 
}, 00:16:17.108 "auth": { 00:16:17.108 "state": "completed", 00:16:17.108 "digest": "sha512", 00:16:17.108 "dhgroup": "ffdhe3072" 00:16:17.108 } 00:16:17.108 } 00:16:17.108 ]' 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.108 22:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.365 22:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:18.742 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.000 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.257 00:16:19.257 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.257 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.257 22:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.516 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.516 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.516 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.516 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:19.775 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.775 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.775 { 00:16:19.775 "cntlid": 117, 00:16:19.775 "qid": 0, 00:16:19.775 "state": "enabled", 00:16:19.775 "thread": "nvmf_tgt_poll_group_000", 00:16:19.775 "listen_address": { 00:16:19.775 "trtype": "TCP", 00:16:19.775 "adrfam": "IPv4", 00:16:19.775 "traddr": "10.0.0.2", 00:16:19.775 "trsvcid": "4420" 00:16:19.775 }, 00:16:19.775 "peer_address": { 00:16:19.775 "trtype": "TCP", 00:16:19.776 "adrfam": "IPv4", 00:16:19.776 "traddr": "10.0.0.1", 00:16:19.776 "trsvcid": "43204" 00:16:19.776 }, 00:16:19.776 "auth": { 00:16:19.776 "state": "completed", 00:16:19.776 "digest": "sha512", 00:16:19.776 "dhgroup": "ffdhe3072" 00:16:19.776 } 00:16:19.776 } 00:16:19.776 ]' 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.776 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:20.035 22:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.412 22:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:21.670 22:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.670 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.928 00:16:21.928 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.928 22:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.928 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.186 { 00:16:22.186 "cntlid": 119, 00:16:22.186 "qid": 0, 00:16:22.186 "state": "enabled", 00:16:22.186 "thread": "nvmf_tgt_poll_group_000", 00:16:22.186 "listen_address": { 00:16:22.186 "trtype": "TCP", 00:16:22.186 "adrfam": "IPv4", 00:16:22.186 "traddr": "10.0.0.2", 00:16:22.186 "trsvcid": "4420" 00:16:22.186 }, 00:16:22.186 "peer_address": { 00:16:22.186 "trtype": "TCP", 00:16:22.186 "adrfam": "IPv4", 00:16:22.186 "traddr": "10.0.0.1", 00:16:22.186 "trsvcid": "43238" 00:16:22.186 }, 00:16:22.186 "auth": { 00:16:22.186 "state": "completed", 00:16:22.186 "digest": "sha512", 00:16:22.186 "dhgroup": "ffdhe3072" 00:16:22.186 } 00:16:22.186 } 00:16:22.186 ]' 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.186 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.446 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.446 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.446 22:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.705 22:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.082 22:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.082 22:25:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.082 22:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.650 00:16:24.650 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.650 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.650 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.913 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.914 { 00:16:24.914 "cntlid": 121, 00:16:24.914 "qid": 0, 00:16:24.914 "state": "enabled", 00:16:24.914 "thread": 
"nvmf_tgt_poll_group_000", 00:16:24.914 "listen_address": { 00:16:24.914 "trtype": "TCP", 00:16:24.914 "adrfam": "IPv4", 00:16:24.914 "traddr": "10.0.0.2", 00:16:24.914 "trsvcid": "4420" 00:16:24.914 }, 00:16:24.914 "peer_address": { 00:16:24.914 "trtype": "TCP", 00:16:24.914 "adrfam": "IPv4", 00:16:24.914 "traddr": "10.0.0.1", 00:16:24.914 "trsvcid": "43258" 00:16:24.914 }, 00:16:24.914 "auth": { 00:16:24.914 "state": "completed", 00:16:24.914 "digest": "sha512", 00:16:24.914 "dhgroup": "ffdhe4096" 00:16:24.914 } 00:16:24.914 } 00:16:24.914 ]' 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.914 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.528 22:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.465 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.724 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.291 00:16:27.291 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.291 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.291 22:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.549 { 00:16:27.549 "cntlid": 123, 00:16:27.549 "qid": 0, 00:16:27.549 "state": "enabled", 00:16:27.549 "thread": "nvmf_tgt_poll_group_000", 00:16:27.549 "listen_address": { 00:16:27.549 "trtype": "TCP", 00:16:27.549 "adrfam": "IPv4", 00:16:27.549 "traddr": "10.0.0.2", 00:16:27.549 "trsvcid": "4420" 00:16:27.549 }, 00:16:27.549 "peer_address": { 00:16:27.549 "trtype": "TCP", 00:16:27.549 "adrfam": "IPv4", 00:16:27.549 "traddr": "10.0.0.1", 00:16:27.549 "trsvcid": "43284" 00:16:27.549 }, 00:16:27.549 "auth": { 00:16:27.549 "state": "completed", 00:16:27.549 "digest": "sha512", 00:16:27.549 "dhgroup": "ffdhe4096" 00:16:27.549 } 00:16:27.549 } 00:16:27.549 ]' 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.549 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.805 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.805 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.805 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.805 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.805 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.062 22:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.438 22:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.438 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.438 22:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.004 00:16:30.004 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.004 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.004 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.262 { 00:16:30.262 "cntlid": 125, 00:16:30.262 "qid": 0, 00:16:30.262 "state": "enabled", 00:16:30.262 "thread": "nvmf_tgt_poll_group_000", 00:16:30.262 "listen_address": { 00:16:30.262 "trtype": "TCP", 00:16:30.262 "adrfam": "IPv4", 00:16:30.262 "traddr": "10.0.0.2", 00:16:30.262 "trsvcid": "4420" 00:16:30.262 }, 00:16:30.262 "peer_address": { 00:16:30.262 "trtype": "TCP", 00:16:30.262 "adrfam": "IPv4", 00:16:30.262 "traddr": "10.0.0.1", 
00:16:30.262 "trsvcid": "41578" 00:16:30.262 }, 00:16:30.262 "auth": { 00:16:30.262 "state": "completed", 00:16:30.262 "digest": "sha512", 00:16:30.262 "dhgroup": "ffdhe4096" 00:16:30.262 } 00:16:30.262 } 00:16:30.262 ]' 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.262 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.263 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.263 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.263 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.521 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.521 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.521 22:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.778 22:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:31.716 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.973 22:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.973 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.230 22:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.489 00:16:32.748 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.748 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.748 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.007 22:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.007 { 00:16:33.007 "cntlid": 127, 00:16:33.007 "qid": 0, 00:16:33.007 "state": "enabled", 00:16:33.007 "thread": "nvmf_tgt_poll_group_000", 00:16:33.007 "listen_address": { 00:16:33.007 "trtype": "TCP", 00:16:33.007 "adrfam": "IPv4", 00:16:33.007 "traddr": "10.0.0.2", 00:16:33.007 "trsvcid": "4420" 00:16:33.007 }, 00:16:33.007 "peer_address": { 00:16:33.007 "trtype": "TCP", 00:16:33.007 "adrfam": "IPv4", 00:16:33.007 "traddr": "10.0.0.1", 00:16:33.007 "trsvcid": "41594" 00:16:33.007 }, 00:16:33.007 "auth": { 00:16:33.007 "state": "completed", 00:16:33.007 "digest": "sha512", 00:16:33.007 "dhgroup": "ffdhe4096" 00:16:33.007 } 00:16:33.007 } 00:16:33.007 ]' 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.007 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.265 22:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.642 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.901 22:26:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:34.901 22:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:16:35.470 00:16:35.470 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.470 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.470 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.728 { 00:16:35.728 "cntlid": 129, 00:16:35.728 "qid": 0, 00:16:35.728 "state": "enabled", 00:16:35.728 "thread": "nvmf_tgt_poll_group_000", 00:16:35.728 "listen_address": { 00:16:35.728 "trtype": "TCP", 00:16:35.728 "adrfam": "IPv4", 00:16:35.728 "traddr": "10.0.0.2", 00:16:35.728 "trsvcid": "4420" 00:16:35.728 }, 00:16:35.728 "peer_address": { 00:16:35.728 "trtype": "TCP", 00:16:35.728 "adrfam": "IPv4", 00:16:35.728 "traddr": "10.0.0.1", 00:16:35.728 "trsvcid": "41614" 00:16:35.728 }, 00:16:35.728 "auth": { 00:16:35.728 "state": "completed", 00:16:35.728 "digest": "sha512", 00:16:35.728 "dhgroup": "ffdhe6144" 00:16:35.728 } 00:16:35.728 } 00:16:35.728 ]' 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.728 
22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.728 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.988 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.988 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.988 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.246 22:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.184 22:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.751 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.318 00:16:38.318 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.318 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.318 22:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.576 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:16:38.576 { 00:16:38.576 "cntlid": 131, 00:16:38.576 "qid": 0, 00:16:38.576 "state": "enabled", 00:16:38.576 "thread": "nvmf_tgt_poll_group_000", 00:16:38.576 "listen_address": { 00:16:38.576 "trtype": "TCP", 00:16:38.576 "adrfam": "IPv4", 00:16:38.576 "traddr": "10.0.0.2", 00:16:38.576 "trsvcid": "4420" 00:16:38.576 }, 00:16:38.577 "peer_address": { 00:16:38.577 "trtype": "TCP", 00:16:38.577 "adrfam": "IPv4", 00:16:38.577 "traddr": "10.0.0.1", 00:16:38.577 "trsvcid": "41654" 00:16:38.577 }, 00:16:38.577 "auth": { 00:16:38.577 "state": "completed", 00:16:38.577 "digest": "sha512", 00:16:38.577 "dhgroup": "ffdhe6144" 00:16:38.577 } 00:16:38.577 } 00:16:38.577 ]' 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.577 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.146 22:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 
--hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.082 22:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.341 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.280 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.280 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.539 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.539 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.539 { 00:16:41.539 "cntlid": 133, 00:16:41.539 "qid": 0, 00:16:41.539 "state": "enabled", 00:16:41.539 "thread": "nvmf_tgt_poll_group_000", 00:16:41.539 "listen_address": { 00:16:41.539 "trtype": "TCP", 00:16:41.539 "adrfam": "IPv4", 00:16:41.539 "traddr": "10.0.0.2", 00:16:41.539 "trsvcid": "4420" 00:16:41.539 }, 00:16:41.539 "peer_address": { 00:16:41.539 "trtype": "TCP", 00:16:41.539 "adrfam": "IPv4", 00:16:41.539 "traddr": "10.0.0.1", 00:16:41.539 "trsvcid": "41936" 00:16:41.539 }, 00:16:41.539 "auth": { 00:16:41.539 "state": "completed", 00:16:41.539 "digest": "sha512", 00:16:41.539 "dhgroup": "ffdhe6144" 00:16:41.539 } 00:16:41.539 } 00:16:41.539 ]' 00:16:41.539 22:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 
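After each attach, the test pulls `nvmf_subsystem_get_qpairs` and checks `auth.digest`, `auth.dhgroup`, and `auth.state` with `jq`, as in the `[[ sha512 == \s\h\a\5\1\2 ]]` comparisons above. A dependency-free sketch of that validation against a canned qpair record (assumed JSON shape; the real script uses `jq -r '.[0].auth.digest'` on live RPC output):

```shell
# Canned single-qpair response, shaped like the trace's nvmf_subsystem_get_qpairs output.
qpairs='[{"cntlid":133,"qid":0,"state":"enabled","auth":{"state":"completed","digest":"sha512","dhgroup":"ffdhe6144"}}]'

# json_field <json> <key>: extract a string value; sed stands in for jq here.
json_field() {
  printf '%s' "$1" | sed -n "s/.*\"$2\":\"\([^\"]*\)\".*/\1/p"
}

digest=$(json_field "$qpairs" digest)
dhgroup=$(json_field "$qpairs" dhgroup)
state=$(json_field "$qpairs" state)

# Mirror the script's three checks; auth.state must reach "completed".
[[ $digest == sha512 && $dhgroup == ffdhe6144 && $state == completed ]] \
  && echo "qpair auth checks passed"
```

The `state: completed` check is the actual pass/fail signal: it confirms the DH-HMAC-CHAP exchange finished rather than the connection merely being established.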
00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.539 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.799 22:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.176 22:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.176 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.434 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.435 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
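Note that the key3 round above passes only `--dhchap-key key3` with no `--dhchap-ctrlr-key`: the script's `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` expansion emits the controller-key flags only when a controller key is configured, so key3 authenticates one-way. A small sketch of that `:+` idiom (illustrative values; in `auth.sh` the index comes from the function's `$3` argument):

```shell
# ckeys[3] is empty, mirroring the key3 round in the trace above.
ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")

# flags_for <keyid>: build the dhchap flag list; ${var:+word} expands to
# "word" only when var is set and non-empty, so the ctrlr-key pair is
# omitted entirely for an empty controller key.
flags_for() {
  local keyid=$1
  local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo --dhchap-key "key$keyid" "${ckey[@]}"
}

flags_for 1   # -> --dhchap-key key1 --dhchap-ctrlr-key ckey1
flags_for 3   # -> --dhchap-key key3   (no controller key: one-way auth)
```

Using an array plus a quoted `"${ckey[@]}"` expansion is what lets the optional flags disappear cleanly instead of leaving an empty argument behind.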
00:16:43.435 22:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.003 00:16:44.003 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.003 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.003 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.260 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.261 { 00:16:44.261 "cntlid": 135, 00:16:44.261 "qid": 0, 00:16:44.261 "state": "enabled", 00:16:44.261 "thread": "nvmf_tgt_poll_group_000", 00:16:44.261 "listen_address": { 00:16:44.261 "trtype": "TCP", 00:16:44.261 "adrfam": "IPv4", 00:16:44.261 "traddr": "10.0.0.2", 00:16:44.261 "trsvcid": "4420" 00:16:44.261 }, 00:16:44.261 "peer_address": { 00:16:44.261 "trtype": "TCP", 00:16:44.261 "adrfam": "IPv4", 00:16:44.261 "traddr": "10.0.0.1", 
00:16:44.261 "trsvcid": "41956" 00:16:44.261 }, 00:16:44.261 "auth": { 00:16:44.261 "state": "completed", 00:16:44.261 "digest": "sha512", 00:16:44.261 "dhgroup": "ffdhe6144" 00:16:44.261 } 00:16:44.261 } 00:16:44.261 ]' 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.261 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.519 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.519 22:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.519 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.519 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.519 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.777 22:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.153 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.154 22:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.526 00:16:47.526 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.526 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.526 22:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.526 { 00:16:47.526 "cntlid": 137, 00:16:47.526 "qid": 0, 00:16:47.526 "state": "enabled", 00:16:47.526 "thread": "nvmf_tgt_poll_group_000", 00:16:47.526 "listen_address": { 00:16:47.526 "trtype": "TCP", 00:16:47.526 "adrfam": "IPv4", 00:16:47.526 "traddr": "10.0.0.2", 00:16:47.526 "trsvcid": "4420" 00:16:47.526 }, 00:16:47.526 "peer_address": { 00:16:47.526 "trtype": "TCP", 00:16:47.526 "adrfam": "IPv4", 00:16:47.526 "traddr": "10.0.0.1", 00:16:47.526 "trsvcid": "41988" 00:16:47.526 }, 00:16:47.526 "auth": { 00:16:47.526 "state": "completed", 00:16:47.526 "digest": "sha512", 00:16:47.526 "dhgroup": "ffdhe8192" 00:16:47.526 } 00:16:47.526 } 00:16:47.526 ]' 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.526 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.784 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.785 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.785 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.785 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.785 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.041 22:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.448 22:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:49.448 22:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.448 22:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:50.405 00:16:50.663 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.663 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.663 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.921 { 00:16:50.921 "cntlid": 139, 00:16:50.921 "qid": 0, 00:16:50.921 "state": "enabled", 00:16:50.921 "thread": "nvmf_tgt_poll_group_000", 00:16:50.921 "listen_address": { 00:16:50.921 "trtype": "TCP", 00:16:50.921 "adrfam": "IPv4", 00:16:50.921 "traddr": "10.0.0.2", 00:16:50.921 "trsvcid": "4420" 00:16:50.921 }, 00:16:50.921 "peer_address": { 00:16:50.921 "trtype": "TCP", 00:16:50.921 "adrfam": "IPv4", 00:16:50.921 "traddr": "10.0.0.1", 00:16:50.921 "trsvcid": "41732" 00:16:50.921 }, 00:16:50.921 "auth": { 00:16:50.921 "state": "completed", 00:16:50.921 "digest": "sha512", 00:16:50.921 "dhgroup": "ffdhe8192" 00:16:50.921 } 00:16:50.921 } 00:16:50.921 ]' 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.921 
22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.921 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.179 22:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:01:YjlmZjBkYTBlZjhiOWMxMDBlYzNhMWE4NjFlMTM5NzcebKtQ: --dhchap-ctrl-secret DHHC-1:02:YjEyZTRlNzUwNWQyY2RiYTZjOTk2ZjJkMjJlOWNkM2FmMGM1Yjc0Yzc2YjIwOWI5roOT2A==: 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.558 22:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.558 22:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.817 22:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.817 22:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.756 00:16:53.756 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:53.756 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:53.756 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.015 { 
00:16:54.015 "cntlid": 141, 00:16:54.015 "qid": 0, 00:16:54.015 "state": "enabled", 00:16:54.015 "thread": "nvmf_tgt_poll_group_000", 00:16:54.015 "listen_address": { 00:16:54.015 "trtype": "TCP", 00:16:54.015 "adrfam": "IPv4", 00:16:54.015 "traddr": "10.0.0.2", 00:16:54.015 "trsvcid": "4420" 00:16:54.015 }, 00:16:54.015 "peer_address": { 00:16:54.015 "trtype": "TCP", 00:16:54.015 "adrfam": "IPv4", 00:16:54.015 "traddr": "10.0.0.1", 00:16:54.015 "trsvcid": "41756" 00:16:54.015 }, 00:16:54.015 "auth": { 00:16:54.015 "state": "completed", 00:16:54.015 "digest": "sha512", 00:16:54.015 "dhgroup": "ffdhe8192" 00:16:54.015 } 00:16:54.015 } 00:16:54.015 ]' 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.015 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.274 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.274 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.274 22:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.533 22:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid 
a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:02:ZDRmNDAzMGUyOTE0ZTQyOTMwZmNkOGM1MWJmYmM5MWRjNWYxNWQwYjkzZTI4NDRk64rDTA==: --dhchap-ctrl-secret DHHC-1:01:NWJjMTZhMjgzYWM4MTUzZmMwYmI0MTg4ZWYzYWM3MmZQQcBP: 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.912 22:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.295 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.295 22:26:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.295 { 00:16:57.295 "cntlid": 143, 00:16:57.295 "qid": 0, 00:16:57.295 "state": "enabled", 00:16:57.295 "thread": "nvmf_tgt_poll_group_000", 00:16:57.295 "listen_address": { 00:16:57.295 "trtype": "TCP", 00:16:57.295 "adrfam": "IPv4", 00:16:57.295 "traddr": "10.0.0.2", 00:16:57.295 "trsvcid": "4420" 00:16:57.295 }, 00:16:57.295 "peer_address": { 00:16:57.295 "trtype": "TCP", 00:16:57.295 "adrfam": "IPv4", 00:16:57.295 "traddr": "10.0.0.1", 00:16:57.295 "trsvcid": "41796" 00:16:57.295 }, 00:16:57.295 "auth": { 00:16:57.295 "state": "completed", 00:16:57.295 "digest": "sha512", 00:16:57.295 "dhgroup": "ffdhe8192" 00:16:57.295 } 00:16:57.295 } 00:16:57.295 ]' 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.295 22:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.554 22:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.554 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.554 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.812 22:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:59.190 22:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.190 22:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.571 00:17:00.571 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:00.571 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:00.571 22:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:00.571 { 00:17:00.571 "cntlid": 145, 00:17:00.571 "qid": 0, 00:17:00.571 "state": "enabled", 
00:17:00.571 "thread": "nvmf_tgt_poll_group_000", 00:17:00.571 "listen_address": { 00:17:00.571 "trtype": "TCP", 00:17:00.571 "adrfam": "IPv4", 00:17:00.571 "traddr": "10.0.0.2", 00:17:00.571 "trsvcid": "4420" 00:17:00.571 }, 00:17:00.571 "peer_address": { 00:17:00.571 "trtype": "TCP", 00:17:00.571 "adrfam": "IPv4", 00:17:00.571 "traddr": "10.0.0.1", 00:17:00.571 "trsvcid": "35132" 00:17:00.571 }, 00:17:00.571 "auth": { 00:17:00.571 "state": "completed", 00:17:00.571 "digest": "sha512", 00:17:00.571 "dhgroup": "ffdhe8192" 00:17:00.571 } 00:17:00.571 } 00:17:00.571 ]' 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.571 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.829 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.829 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.829 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.088 22:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret 
DHHC-1:00:NGEzMWVhYTlhMjk5Y2M4NDc3OWNkMjFiMTdjYjQ1NzY4MTczMjczMmNiYWVkMDFjUM/DaA==: --dhchap-ctrl-secret DHHC-1:03:MDU4YjJmY2JkODcxOGRlMTBhMWFhZDA2Y2EwNDViZmVlMDZmMmM4YWZlMjMwZjcwMDY0NGJjMGRmMTZjODUxNQyopow=: 00:17:02.028 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.028 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:02.028 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.028 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:02.287 
22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:02.287 22:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:03.226 request: 00:17:03.226 { 00:17:03.226 "name": "nvme0", 00:17:03.226 "trtype": "tcp", 00:17:03.226 "traddr": "10.0.0.2", 00:17:03.226 "adrfam": "ipv4", 00:17:03.226 "trsvcid": "4420", 00:17:03.226 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.226 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:03.226 "prchk_reftag": false, 00:17:03.226 "prchk_guard": false, 00:17:03.226 "hdgst": false, 00:17:03.226 "ddgst": false, 00:17:03.226 "dhchap_key": "key2", 
00:17:03.226 "method": "bdev_nvme_attach_controller", 00:17:03.226 "req_id": 1 00:17:03.226 } 00:17:03.226 Got JSON-RPC error response 00:17:03.226 response: 00:17:03.226 { 00:17:03.226 "code": -5, 00:17:03.226 "message": "Input/output error" 00:17:03.226 } 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.226 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.227 22:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:04.166 request: 00:17:04.166 { 00:17:04.166 "name": "nvme0", 00:17:04.166 
"trtype": "tcp", 00:17:04.166 "traddr": "10.0.0.2", 00:17:04.166 "adrfam": "ipv4", 00:17:04.166 "trsvcid": "4420", 00:17:04.166 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.166 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:04.166 "prchk_reftag": false, 00:17:04.166 "prchk_guard": false, 00:17:04.166 "hdgst": false, 00:17:04.166 "ddgst": false, 00:17:04.166 "dhchap_key": "key1", 00:17:04.166 "dhchap_ctrlr_key": "ckey2", 00:17:04.166 "method": "bdev_nvme_attach_controller", 00:17:04.166 "req_id": 1 00:17:04.166 } 00:17:04.166 Got JSON-RPC error response 00:17:04.166 response: 00:17:04.166 { 00:17:04.166 "code": -5, 00:17:04.166 "message": "Input/output error" 00:17:04.166 } 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key1 
00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:04.166 22:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.166 22:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.105 request: 00:17:05.105 { 00:17:05.105 "name": "nvme0", 00:17:05.105 "trtype": "tcp", 00:17:05.105 "traddr": "10.0.0.2", 00:17:05.105 "adrfam": "ipv4", 00:17:05.105 "trsvcid": "4420", 00:17:05.105 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:05.105 "prchk_reftag": false, 00:17:05.105 "prchk_guard": false, 00:17:05.105 "hdgst": false, 00:17:05.105 "ddgst": false, 00:17:05.105 "dhchap_key": "key1", 00:17:05.105 "dhchap_ctrlr_key": "ckey1", 00:17:05.105 "method": "bdev_nvme_attach_controller", 00:17:05.105 "req_id": 1 00:17:05.105 } 00:17:05.105 Got JSON-RPC error response 00:17:05.105 response: 00:17:05.105 { 00:17:05.105 "code": -5, 00:17:05.105 "message": "Input/output error" 00:17:05.105 } 00:17:05.105 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:05.105 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:05.105 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:05.105 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:05.105 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3835370 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3835370 ']' 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3835370 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3835370 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3835370' 00:17:05.106 killing process with pid 3835370 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3835370 00:17:05.106 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3835370 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # 
xtrace_disable 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3855845 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3855845 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3855845 ']' 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.365 22:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.365 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:05.365 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.365 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3855845 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3855845 ']' 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.623 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.191 
22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:06.191 22:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:07.129 00:17:07.129 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.129 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.129 22:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.389 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.389 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.389 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.389 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.647 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.647 22:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.647 { 00:17:07.647 "cntlid": 1, 00:17:07.647 "qid": 0, 00:17:07.647 "state": "enabled", 00:17:07.647 "thread": "nvmf_tgt_poll_group_000", 00:17:07.647 "listen_address": { 00:17:07.647 "trtype": "TCP", 00:17:07.647 "adrfam": "IPv4", 00:17:07.647 "traddr": "10.0.0.2", 00:17:07.647 "trsvcid": "4420" 00:17:07.647 }, 00:17:07.647 "peer_address": { 00:17:07.647 "trtype": "TCP", 00:17:07.647 "adrfam": "IPv4", 00:17:07.647 "traddr": "10.0.0.1", 00:17:07.647 "trsvcid": "35178" 00:17:07.647 }, 00:17:07.648 "auth": { 00:17:07.648 "state": "completed", 00:17:07.648 "digest": "sha512", 00:17:07.648 "dhgroup": "ffdhe8192" 00:17:07.648 } 00:17:07.648 } 00:17:07.648 ]' 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.648 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.906 22:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-secret DHHC-1:03:NDg5NmI2ZjIzMGY3OTlhMzhmMDIyMjE1NTQyNzA3YzhjMzE0ZjJmM2Y5N2FkYjRjNTRkYzM1NmVhZDEyMTA1NA2KbfU=: 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --dhchap-key key3 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:09.283 22:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:09.543 22:26:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.543 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.801 request: 00:17:09.801 { 00:17:09.801 "name": "nvme0", 00:17:09.801 "trtype": "tcp", 00:17:09.801 
"traddr": "10.0.0.2", 00:17:09.801 "adrfam": "ipv4", 00:17:09.801 "trsvcid": "4420", 00:17:09.801 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:09.801 "prchk_reftag": false, 00:17:09.801 "prchk_guard": false, 00:17:09.801 "hdgst": false, 00:17:09.801 "ddgst": false, 00:17:09.801 "dhchap_key": "key3", 00:17:09.801 "method": "bdev_nvme_attach_controller", 00:17:09.801 "req_id": 1 00:17:09.801 } 00:17:09.801 Got JSON-RPC error response 00:17:09.801 response: 00:17:09.801 { 00:17:09.801 "code": -5, 00:17:09.801 "message": "Input/output error" 00:17:09.801 } 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:09.801 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.060 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.318 request: 00:17:10.318 { 00:17:10.318 "name": "nvme0", 00:17:10.318 "trtype": "tcp", 00:17:10.318 "traddr": "10.0.0.2", 00:17:10.318 "adrfam": "ipv4", 00:17:10.318 "trsvcid": "4420", 00:17:10.318 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.318 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:10.318 "prchk_reftag": false, 00:17:10.318 "prchk_guard": false, 00:17:10.318 "hdgst": false, 00:17:10.318 "ddgst": false, 00:17:10.318 "dhchap_key": "key3", 00:17:10.318 "method": "bdev_nvme_attach_controller", 00:17:10.318 "req_id": 1 00:17:10.318 } 00:17:10.318 Got JSON-RPC error response 00:17:10.318 response: 00:17:10.318 { 00:17:10.318 "code": -5, 00:17:10.318 "message": "Input/output error" 00:17:10.318 } 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:10.318 22:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.577 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.835 request: 00:17:10.835 { 00:17:10.835 "name": "nvme0", 00:17:10.835 "trtype": "tcp", 00:17:10.835 "traddr": "10.0.0.2", 00:17:10.835 "adrfam": "ipv4", 00:17:10.835 "trsvcid": "4420", 00:17:10.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc", 00:17:10.835 "prchk_reftag": false, 00:17:10.835 "prchk_guard": false, 00:17:10.835 "hdgst": false, 00:17:10.835 "ddgst": false, 00:17:10.835 "dhchap_key": "key0", 00:17:10.835 "dhchap_ctrlr_key": "key1", 00:17:10.835 "method": "bdev_nvme_attach_controller", 00:17:10.835 "req_id": 1 00:17:10.835 } 00:17:10.835 Got JSON-RPC error response 00:17:10.835 response: 00:17:10.835 { 00:17:10.835 "code": -5, 00:17:10.835 "message": "Input/output error" 00:17:10.835 } 00:17:10.835 22:26:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:10.835 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:10.835 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:10.835 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:10.835 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:10.835 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:11.094 00:17:11.094 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:11.094 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:11.094 22:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.353 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.353 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.353 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3835396 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3835396 ']' 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3835396 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:11.612 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3835396 00:17:11.871 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:11.871 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:11.871 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3835396' 00:17:11.871 killing process with pid 3835396 00:17:11.871 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3835396 00:17:11.871 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3835396 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:12.131 22:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.131 rmmod nvme_tcp 00:17:12.131 rmmod nvme_fabrics 00:17:12.131 rmmod nvme_keyring 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 3855845 ']' 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3855845 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3855845 ']' 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3855845 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855845 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3855845' 00:17:12.131 killing process with pid 3855845 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3855845 00:17:12.131 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3855845 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.391 22:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.332 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:14.332 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.aZB /tmp/spdk.key-sha256.VLZ /tmp/spdk.key-sha384.NHI /tmp/spdk.key-sha512.CpI /tmp/spdk.key-sha512.ZjA /tmp/spdk.key-sha384.3mJ /tmp/spdk.key-sha256.TCZ '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:14.332 00:17:14.332 real 3m38.873s 00:17:14.332 user 8m29.771s 00:17:14.332 sys 0m25.978s 00:17:14.332 22:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:14.332 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.332 ************************************ 00:17:14.332 END TEST nvmf_auth_target 00:17:14.332 ************************************ 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.591 ************************************ 00:17:14.591 START TEST nvmf_bdevio_no_huge 00:17:14.591 ************************************ 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:14.591 * Looking for test storage... 
00:17:14.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.591 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:14.592 
22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:14.592 22:26:40 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.973 22:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:15.973 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:15.973 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:16.234 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:16.234 Found net devices under 0000:08:00.0: cvl_0_0 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:16.234 Found net devices under 0000:08:00.1: cvl_0_1 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.234 22:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.234 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:17:16.235 00:17:16.235 --- 10.0.0.2 ping statistics --- 00:17:16.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.235 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:17:16.235 00:17:16.235 --- 10.0.0.1 ping statistics --- 00:17:16.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.235 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3858005 00:17:16.235 22:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3858005 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3858005 ']' 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.235 22:26:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.235 [2024-07-24 22:26:41.873307] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:16.235 [2024-07-24 22:26:41.873409] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:16.495 [2024-07-24 22:26:41.950959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.495 [2024-07-24 22:26:42.074654] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:16.495 [2024-07-24 22:26:42.074714] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:16.495 [2024-07-24 22:26:42.074730] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.495 [2024-07-24 22:26:42.074743] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.495 [2024-07-24 22:26:42.074755] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.495 [2024-07-24 22:26:42.074839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:16.495 [2024-07-24 22:26:42.075154] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:16.495 [2024-07-24 22:26:42.075208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:16.495 [2024-07-24 22:26:42.075214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:17:16.495 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.495 [2024-07-24 22:26:42.196756] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 Malloc0 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:16.754 [2024-07-24 22:26:42.235423] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:16.754 { 00:17:16.754 "params": { 00:17:16.754 "name": "Nvme$subsystem", 00:17:16.754 "trtype": "$TEST_TRANSPORT", 00:17:16.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:16.754 "adrfam": "ipv4", 00:17:16.754 "trsvcid": "$NVMF_PORT", 00:17:16.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:16.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:16.754 "hdgst": ${hdgst:-false}, 00:17:16.754 "ddgst": ${ddgst:-false} 00:17:16.754 }, 00:17:16.754 "method": "bdev_nvme_attach_controller" 00:17:16.754 } 00:17:16.754 EOF 00:17:16.754 )") 00:17:16.754 22:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:16.754 22:26:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:16.754 "params": { 00:17:16.754 "name": "Nvme1", 00:17:16.754 "trtype": "tcp", 00:17:16.754 "traddr": "10.0.0.2", 00:17:16.754 "adrfam": "ipv4", 00:17:16.754 "trsvcid": "4420", 00:17:16.754 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:16.754 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:16.754 "hdgst": false, 00:17:16.754 "ddgst": false 00:17:16.754 }, 00:17:16.754 "method": "bdev_nvme_attach_controller" 00:17:16.754 }' 00:17:16.754 [2024-07-24 22:26:42.286098] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:16.754 [2024-07-24 22:26:42.286196] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3858029 ] 00:17:16.754 [2024-07-24 22:26:42.352152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:17.014 [2024-07-24 22:26:42.474915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.014 [2024-07-24 22:26:42.474994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.014 [2024-07-24 22:26:42.474998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.275 I/O targets: 00:17:17.275 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:17.275 00:17:17.275 00:17:17.275 CUnit - A unit testing framework for C - Version 2.1-3 00:17:17.275 http://cunit.sourceforge.net/ 00:17:17.275 00:17:17.275 00:17:17.275 Suite: bdevio tests on: Nvme1n1 00:17:17.275 Test: blockdev write read block 
...passed 00:17:17.275 Test: blockdev write zeroes read block ...passed 00:17:17.275 Test: blockdev write zeroes read no split ...passed 00:17:17.275 Test: blockdev write zeroes read split ...passed 00:17:17.275 Test: blockdev write zeroes read split partial ...passed 00:17:17.275 Test: blockdev reset ...[2024-07-24 22:26:42.967820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:17.275 [2024-07-24 22:26:42.967967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xebc570 (9): Bad file descriptor 00:17:17.533 [2024-07-24 22:26:43.069884] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:17.533 passed 00:17:17.533 Test: blockdev write read 8 blocks ...passed 00:17:17.533 Test: blockdev write read size > 128k ...passed 00:17:17.533 Test: blockdev write read invalid size ...passed 00:17:17.533 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:17.533 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:17.533 Test: blockdev write read max offset ...passed 00:17:17.533 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:17.795 Test: blockdev writev readv 8 blocks ...passed 00:17:17.795 Test: blockdev writev readv 30 x 1block ...passed 00:17:17.796 Test: blockdev writev readv block ...passed 00:17:17.796 Test: blockdev writev readv size > 128k ...passed 00:17:17.796 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:17.796 Test: blockdev comparev and writev ...[2024-07-24 22:26:43.365676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.365718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.365745] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.365767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.366145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.366171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.366194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.366220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.366592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.366619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.366643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.366659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.367004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.367029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 
00:17:17.796 [2024-07-24 22:26:43.367053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:17.796 [2024-07-24 22:26:43.367069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:17.796 passed 00:17:17.796 Test: blockdev nvme passthru rw ...passed 00:17:17.796 Test: blockdev nvme passthru vendor specific ...[2024-07-24 22:26:43.448847] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.796 [2024-07-24 22:26:43.448876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.449077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.796 [2024-07-24 22:26:43.449102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.449306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.796 [2024-07-24 22:26:43.449332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:17.796 [2024-07-24 22:26:43.449539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:17.796 [2024-07-24 22:26:43.449563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:17.796 passed 00:17:17.796 Test: blockdev nvme admin passthru ...passed 00:17:18.059 Test: blockdev copy ...passed 00:17:18.059 00:17:18.059 Run Summary: Type Total Ran Passed Failed Inactive 00:17:18.059 suites 1 1 
n/a 0 0 00:17:18.059 tests 23 23 23 0 0 00:17:18.059 asserts 152 152 152 0 n/a 00:17:18.059 00:17:18.059 Elapsed time = 1.416 seconds 00:17:18.317 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.317 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.318 rmmod nvme_tcp 00:17:18.318 rmmod nvme_fabrics 00:17:18.318 rmmod nvme_keyring 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:18.318 22:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3858005 ']' 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3858005 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3858005 ']' 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3858005 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3858005 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3858005' 00:17:18.318 killing process with pid 3858005 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3858005 00:17:18.318 22:26:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3858005 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:18.888 22:26:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:20.793 00:17:20.793 real 0m6.360s 00:17:20.793 user 0m12.207s 00:17:20.793 sys 0m2.178s 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:20.793 ************************************ 00:17:20.793 END TEST nvmf_bdevio_no_huge 00:17:20.793 ************************************ 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:20.793 ************************************ 00:17:20.793 START TEST nvmf_tls 00:17:20.793 ************************************ 00:17:20.793 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:21.052 * Looking for test storage... 
00:17:21.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:21.052 
22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.052 22:26:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:22.431 22:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:22.431 22:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:17:22.431 Found 0000:08:00.0 (0x8086 - 0x159b) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:17:22.431 Found 0000:08:00.1 (0x8086 - 0x159b) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:22.431 22:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:17:22.431 Found net devices under 0000:08:00.0: cvl_0_0 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:17:22.431 Found net devices under 0000:08:00.1: cvl_0_1 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:22.431 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:22.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:22.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:17:22.690 00:17:22.690 --- 10.0.0.2 ping statistics --- 00:17:22.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.690 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:22.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:22.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:17:22.690 00:17:22.690 --- 10.0.0.1 ping statistics --- 00:17:22.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:22.690 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@722 -- # xtrace_disable 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3859718 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3859718 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3859718 ']' 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:22.690 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.690 [2024-07-24 22:26:48.335569] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:17:22.690 [2024-07-24 22:26:48.335668] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.690 EAL: No free 2048 kB hugepages reported on node 1 00:17:22.951 [2024-07-24 22:26:48.403057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.951 [2024-07-24 22:26:48.518711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.951 [2024-07-24 22:26:48.518773] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.951 [2024-07-24 22:26:48.518790] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:22.951 [2024-07-24 22:26:48.518803] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:22.951 [2024-07-24 22:26:48.518815] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:22.951 [2024-07-24 22:26:48.518859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:22.951 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:23.210 true 00:17:23.210 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:23.210 22:26:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:23.777 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:23.777 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:23.777 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:24.036 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:24.036 22:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:24.295 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:24.295 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:24.295 22:26:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:24.553 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:24.553 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:24.811 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:24.811 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:24.811 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:24.811 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.069 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:25.069 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:25.069 22:26:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:25.328 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:25.328 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:25.895 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:25.895 
22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:25.895 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:26.153 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:26.153 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:26.412 22:26:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.m1xUyMKlNQ 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.tpcpZIGqnF 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.m1xUyMKlNQ 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.tpcpZIGqnF 00:17:26.412 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:26.670 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:26.929 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.m1xUyMKlNQ 00:17:26.929 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.m1xUyMKlNQ 00:17:26.929 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:27.187 [2024-07-24 22:26:52.851415] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.187 22:26:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:27.445 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:27.703 [2024-07-24 22:26:53.332707] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:27.703 [2024-07-24 22:26:53.332938] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.703 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:27.960 malloc0 00:17:27.960 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:28.218 22:26:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1xUyMKlNQ 00:17:28.478 
[2024-07-24 22:26:54.080007] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:28.478 22:26:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.m1xUyMKlNQ 00:17:28.478 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.700 Initializing NVMe Controllers 00:17:40.700 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:40.700 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:40.700 Initialization complete. Launching workers. 00:17:40.700 ======================================================== 00:17:40.700 Latency(us) 00:17:40.700 Device Information : IOPS MiB/s Average min max 00:17:40.700 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7524.40 29.39 8508.54 1068.37 9422.37 00:17:40.701 ======================================================== 00:17:40.701 Total : 7524.40 29.39 8508.54 1068.37 9422.37 00:17:40.701 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1xUyMKlNQ 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m1xUyMKlNQ' 00:17:40.701 22:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3861287 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3861287 /var/tmp/bdevperf.sock 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3861287 ']' 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:40.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.701 [2024-07-24 22:27:04.271560] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:17:40.701 [2024-07-24 22:27:04.271661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3861287 ] 00:17:40.701 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.701 [2024-07-24 22:27:04.332433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.701 [2024-07-24 22:27:04.449437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1xUyMKlNQ 00:17:40.701 [2024-07-24 22:27:04.830006] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:40.701 [2024-07-24 22:27:04.830151] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:40.701 TLSTESTn1 00:17:40.701 22:27:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:40.701 Running I/O for 10 seconds... 
00:17:50.695 00:17:50.695 Latency(us) 00:17:50.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.695 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:50.695 Verification LBA range: start 0x0 length 0x2000 00:17:50.695 TLSTESTn1 : 10.02 3883.80 15.17 0.00 0.00 32897.36 8349.77 40972.14 00:17:50.695 =================================================================================================================== 00:17:50.695 Total : 3883.80 15.17 0.00 0.00 32897.36 8349.77 40972.14 00:17:50.695 0 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3861287 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3861287 ']' 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3861287 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3861287 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3861287' 00:17:50.695 killing process with pid 3861287 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3861287 00:17:50.695 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.695 
00:17:50.695 Latency(us) 00:17:50.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.695 =================================================================================================================== 00:17:50.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:50.695 [2024-07-24 22:27:15.124518] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3861287 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpcpZIGqnF 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpcpZIGqnF 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpcpZIGqnF 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.695 22:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tpcpZIGqnF' 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3862800 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.695 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3862800 /var/tmp/bdevperf.sock 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3862800 ']' 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.696 [2024-07-24 22:27:15.366462] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:17:50.696 [2024-07-24 22:27:15.366561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862800 ] 00:17:50.696 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.696 [2024-07-24 22:27:15.423611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.696 [2024-07-24 22:27:15.529518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tpcpZIGqnF 00:17:50.696 [2024-07-24 22:27:15.897099] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:50.696 [2024-07-24 22:27:15.897209] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:50.696 [2024-07-24 22:27:15.902343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:50.696 [2024-07-24 22:27:15.902934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f15470 (107): Transport endpoint is not connected 00:17:50.696 [2024-07-24 22:27:15.903924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f15470 
(9): Bad file descriptor 00:17:50.696 [2024-07-24 22:27:15.904923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:50.696 [2024-07-24 22:27:15.904943] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:50.696 [2024-07-24 22:27:15.904960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:50.696 request: 00:17:50.696 { 00:17:50.696 "name": "TLSTEST", 00:17:50.696 "trtype": "tcp", 00:17:50.696 "traddr": "10.0.0.2", 00:17:50.696 "adrfam": "ipv4", 00:17:50.696 "trsvcid": "4420", 00:17:50.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:50.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:50.696 "prchk_reftag": false, 00:17:50.696 "prchk_guard": false, 00:17:50.696 "hdgst": false, 00:17:50.696 "ddgst": false, 00:17:50.696 "psk": "/tmp/tmp.tpcpZIGqnF", 00:17:50.696 "method": "bdev_nvme_attach_controller", 00:17:50.696 "req_id": 1 00:17:50.696 } 00:17:50.696 Got JSON-RPC error response 00:17:50.696 response: 00:17:50.696 { 00:17:50.696 "code": -5, 00:17:50.696 "message": "Input/output error" 00:17:50.696 } 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3862800 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3862800 ']' 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3862800 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3862800 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:50.696 22:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3862800' 00:17:50.696 killing process with pid 3862800 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3862800 00:17:50.696 Received shutdown signal, test time was about 10.000000 seconds 00:17:50.696 00:17:50.696 Latency(us) 00:17:50.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:50.696 =================================================================================================================== 00:17:50.696 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:50.696 [2024-07-24 22:27:15.952253] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:50.696 22:27:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3862800 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.m1xUyMKlNQ 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.m1xUyMKlNQ 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.m1xUyMKlNQ 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m1xUyMKlNQ' 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3862907 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3862907 /var/tmp/bdevperf.sock 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@829 -- # '[' -z 3862907 ']' 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:50.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:50.696 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:50.696 [2024-07-24 22:27:16.191346] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:50.696 [2024-07-24 22:27:16.191438] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862907 ] 00:17:50.696 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.696 [2024-07-24 22:27:16.249046] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.696 [2024-07-24 22:27:16.348401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.956 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:50.956 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:50.956 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.m1xUyMKlNQ 00:17:51.216 [2024-07-24 22:27:16.718409] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:51.216 [2024-07-24 22:27:16.718546] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:51.216 [2024-07-24 22:27:16.723512] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:51.216 [2024-07-24 22:27:16.723560] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:51.216 [2024-07-24 22:27:16.723593] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:51.216 [2024-07-24 22:27:16.724205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1330470 (107): Transport endpoint is not connected 00:17:51.216 [2024-07-24 22:27:16.725200] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1330470 (9): Bad file descriptor 00:17:51.216 [2024-07-24 22:27:16.726212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:51.216 [2024-07-24 22:27:16.726230] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:51.216 [2024-07-24 22:27:16.726246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
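The `tcp_sock_get_key` / `posix_sock_psk_find_session_server_cb` errors above show the target failing to look up a PSK for the TLS identity `NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1`. As a reading aid, here is a minimal sketch of how that identity string is assembled from the host and subsystem NQNs; the `NVMe0R01` prefix is copied verbatim from the log (its field semantics are defined by the NVMe/TCP transport specification, not derived here):

```python
def tls_psk_identity(hostnqn: str, subnqn: str) -> str:
    # Layout copied from the tcp_sock_get_key error messages above:
    # a fixed "NVMe0R01" prefix, then the host NQN, then the subsystem
    # NQN, separated by single spaces.
    return f"NVMe0R01 {hostnqn} {subnqn}"
```

The lookup fails by design here: this `NOT run_bdevperf` case connects with a host/subsystem pairing for which no PSK was registered, so no key matches the presented identity and the connection is torn down.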
00:17:51.216 request: 00:17:51.216 { 00:17:51.216 "name": "TLSTEST", 00:17:51.216 "trtype": "tcp", 00:17:51.216 "traddr": "10.0.0.2", 00:17:51.216 "adrfam": "ipv4", 00:17:51.216 "trsvcid": "4420", 00:17:51.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:51.216 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:51.216 "prchk_reftag": false, 00:17:51.216 "prchk_guard": false, 00:17:51.216 "hdgst": false, 00:17:51.216 "ddgst": false, 00:17:51.216 "psk": "/tmp/tmp.m1xUyMKlNQ", 00:17:51.216 "method": "bdev_nvme_attach_controller", 00:17:51.216 "req_id": 1 00:17:51.216 } 00:17:51.216 Got JSON-RPC error response 00:17:51.216 response: 00:17:51.216 { 00:17:51.216 "code": -5, 00:17:51.216 "message": "Input/output error" 00:17:51.216 } 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3862907 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3862907 ']' 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3862907 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:51.216 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3862907 00:17:51.217 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:51.217 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:51.217 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3862907' 00:17:51.217 killing process with pid 3862907 00:17:51.217 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3862907 00:17:51.217 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:51.217 00:17:51.217 Latency(us) 00:17:51.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.217 =================================================================================================================== 00:17:51.217 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:51.217 [2024-07-24 22:27:16.770237] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:51.217 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3862907 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1xUyMKlNQ 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1xUyMKlNQ 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.m1xUyMKlNQ 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.m1xUyMKlNQ' 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3863000 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3863000 /var/tmp/bdevperf.sock 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3863000 ']' 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.489 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.490 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.490 22:27:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:51.490 [2024-07-24 22:27:17.007366] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:51.490 [2024-07-24 22:27:17.007458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863000 ] 00:17:51.490 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.490 [2024-07-24 22:27:17.063625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.490 [2024-07-24 22:27:17.166652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:51.834 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.834 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:51.834 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.m1xUyMKlNQ 00:17:52.125 [2024-07-24 22:27:17.534703] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.125 [2024-07-24 22:27:17.534809] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.125 [2024-07-24 22:27:17.543983] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:52.125 [2024-07-24 22:27:17.544011] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:52.125 [2024-07-24 22:27:17.544053] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.125 [2024-07-24 22:27:17.544582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a3470 (107): Transport endpoint is not connected 00:17:52.125 [2024-07-24 22:27:17.545570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a3470 (9): Bad file descriptor 00:17:52.125 [2024-07-24 22:27:17.546570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:52.125 [2024-07-24 22:27:17.546589] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.125 [2024-07-24 22:27:17.546605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
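Each failed attach surfaces as a JSON-RPC `request:` / `response:` pair like the ones printed above. The sketch below mirrors the envelope `rpc.py` sends for `bdev_nvme_attach_controller`; the parameter names and values are copied from the logged request, while the outer `jsonrpc`/`id` fields are standard JSON-RPC 2.0 framing assumed here because the log prints only the parameter object:

```python
import json

def attach_controller_request(subnqn: str, hostnqn: str, psk: str) -> str:
    # Parameters as printed in the logged request above; digest options
    # (hdgst/ddgst) and PI checks (prchk_*) are all disabled in this test.
    params = {
        "name": "TLSTEST",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": subnqn,
        "hostnqn": hostnqn,
        "prchk_reftag": False,
        "prchk_guard": False,
        "hdgst": False,
        "ddgst": False,
        "psk": psk,
    }
    return json.dumps({"jsonrpc": "2.0", "id": 1,
                       "method": "bdev_nvme_attach_controller",
                       "params": params})

req = attach_controller_request("nqn.2016-06.io.spdk:cnode2",
                                "nqn.2016-06.io.spdk:host1",
                                "/tmp/tmp.m1xUyMKlNQ")
```

The `-5` in the error response is kernel-style `-EIO` ("Input/output error"), returned once the TCP connection is dropped during the failed TLS handshake.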
00:17:52.125 request: 00:17:52.125 { 00:17:52.125 "name": "TLSTEST", 00:17:52.125 "trtype": "tcp", 00:17:52.125 "traddr": "10.0.0.2", 00:17:52.125 "adrfam": "ipv4", 00:17:52.125 "trsvcid": "4420", 00:17:52.125 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:52.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.125 "prchk_reftag": false, 00:17:52.125 "prchk_guard": false, 00:17:52.125 "hdgst": false, 00:17:52.125 "ddgst": false, 00:17:52.125 "psk": "/tmp/tmp.m1xUyMKlNQ", 00:17:52.125 "method": "bdev_nvme_attach_controller", 00:17:52.125 "req_id": 1 00:17:52.125 } 00:17:52.125 Got JSON-RPC error response 00:17:52.125 response: 00:17:52.125 { 00:17:52.125 "code": -5, 00:17:52.125 "message": "Input/output error" 00:17:52.125 } 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3863000 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3863000 ']' 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3863000 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3863000 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3863000' 00:17:52.125 killing process with pid 3863000 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3863000 00:17:52.125 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:52.125 00:17:52.125 Latency(us) 00:17:52.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.125 =================================================================================================================== 00:17:52.125 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.125 [2024-07-24 22:27:17.590333] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3863000 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:17:52.125 22:27:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:52.125 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3863111 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3863111 /var/tmp/bdevperf.sock 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3863111 ']' 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:52.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.126 22:27:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.126 [2024-07-24 22:27:17.805991] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:52.126 [2024-07-24 22:27:17.806077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863111 ] 00:17:52.383 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.383 [2024-07-24 22:27:17.855147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.383 [2024-07-24 22:27:17.950722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.383 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.383 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.383 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:52.951 [2024-07-24 22:27:18.356152] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:52.951 [2024-07-24 22:27:18.358329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff8a20 (9): Bad file descriptor 00:17:52.951 [2024-07-24 22:27:18.359328] 
nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:52.951 [2024-07-24 22:27:18.359347] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:52.951 [2024-07-24 22:27:18.359364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:52.951 request: 00:17:52.951 { 00:17:52.951 "name": "TLSTEST", 00:17:52.951 "trtype": "tcp", 00:17:52.951 "traddr": "10.0.0.2", 00:17:52.951 "adrfam": "ipv4", 00:17:52.951 "trsvcid": "4420", 00:17:52.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:52.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:52.951 "prchk_reftag": false, 00:17:52.951 "prchk_guard": false, 00:17:52.951 "hdgst": false, 00:17:52.951 "ddgst": false, 00:17:52.951 "method": "bdev_nvme_attach_controller", 00:17:52.951 "req_id": 1 00:17:52.951 } 00:17:52.951 Got JSON-RPC error response 00:17:52.951 response: 00:17:52.951 { 00:17:52.951 "code": -5, 00:17:52.951 "message": "Input/output error" 00:17:52.951 } 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3863111 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3863111 ']' 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3863111 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3863111 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:17:52.951 22:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3863111' 00:17:52.951 killing process with pid 3863111 00:17:52.951 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3863111 00:17:52.951 Received shutdown signal, test time was about 10.000000 seconds 00:17:52.951 00:17:52.951 Latency(us) 00:17:52.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.951 =================================================================================================================== 00:17:52.952 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3863111 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3859718 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3859718 ']' 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3859718 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3859718 00:17:52.952 
22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3859718' 00:17:52.952 killing process with pid 3859718 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3859718 00:17:52.952 [2024-07-24 22:27:18.610639] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:52.952 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3859718 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.SlthOY8JbD 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.SlthOY8JbD 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3863230 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3863230 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3863230 ']' 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.211 22:27:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.211 [2024-07-24 22:27:18.912229] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:17:53.211 [2024-07-24 22:27:18.912317] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.470 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.470 [2024-07-24 22:27:18.966336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.470 [2024-07-24 22:27:19.062017] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.470 [2024-07-24 22:27:19.062079] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.470 [2024-07-24 22:27:19.062092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.470 [2024-07-24 22:27:19.062102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.470 [2024-07-24 22:27:19.062111] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
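At `target/tls.sh@159` above, `format_interchange_psk` pipes the raw hex key through an inline `python -` helper to produce the `NVMeTLSkey-1:02:...:` string. Below is a standalone sketch of that transformation, under the assumption (suggested by decoding the base64 payload shown in the log, whose leading 64 characters encode exactly the ASCII hex string) that the encoded bytes are the ASCII key followed by a 4-byte checksum, taken here to be its little-endian CRC32:

```python
import base64
import struct
import zlib

def format_interchange_psk(hex_key: str, hash_id: int) -> str:
    # Assumed layout: base64(ascii_key || 4-byte checksum), wrapped in
    # the "NVMeTLSkey-1:<hash>:" prefix with a trailing ':'. The
    # checksum is modeled as CRC32-LE of the ASCII key bytes.
    payload = hex_key.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(payload))
    b64 = base64.b64encode(payload + crc).decode("ascii")
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

Note that the script then writes the key to a `mktemp` file and does `chmod 0600` on it: the PSK is a secret, and the target refuses world-readable key files.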
00:17:53.470 [2024-07-24 22:27:19.062135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SlthOY8JbD 00:17:53.470 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:53.728 [2024-07-24 22:27:19.394946] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:53.728 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:53.986 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:54.245 [2024-07-24 22:27:19.924386] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:54.245 [2024-07-24 22:27:19.924643] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:54.245 22:27:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:54.505 malloc0 00:17:54.505 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:55.075 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:17:55.075 [2024-07-24 22:27:20.773266] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlthOY8JbD 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SlthOY8JbD' 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3863368 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.333 22:27:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3863368 /var/tmp/bdevperf.sock 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3863368 ']' 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:55.333 22:27:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.333 [2024-07-24 22:27:20.843087] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:17:55.333 [2024-07-24 22:27:20.843181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3863368 ] 00:17:55.333 EAL: No free 2048 kB hugepages reported on node 1 00:17:55.333 [2024-07-24 22:27:20.900029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.333 [2024-07-24 22:27:20.999362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.591 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:55.591 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:55.591 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:17:55.849 [2024-07-24 22:27:21.369254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:55.849 [2024-07-24 22:27:21.369381] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:55.849 TLSTESTn1 00:17:55.849 22:27:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:56.108 Running I/O for 10 seconds... 
00:18:06.101 00:18:06.101 Latency(us) 00:18:06.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.101 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:06.101 Verification LBA range: start 0x0 length 0x2000 00:18:06.101 TLSTESTn1 : 10.03 3435.04 13.42 0.00 0.00 37192.87 9175.04 55535.69 00:18:06.101 =================================================================================================================== 00:18:06.101 Total : 3435.04 13.42 0.00 0.00 37192.87 9175.04 55535.69 00:18:06.101 0 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3863368 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3863368 ']' 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3863368 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3863368 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3863368' 00:18:06.101 killing process with pid 3863368 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3863368 00:18:06.101 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.101 
00:18:06.101 Latency(us) 00:18:06.101 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.101 =================================================================================================================== 00:18:06.101 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.101 [2024-07-24 22:27:31.677336] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:06.101 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3863368 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.SlthOY8JbD 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlthOY8JbD 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlthOY8JbD 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SlthOY8JbD 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:06.362 22:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.SlthOY8JbD' 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3864389 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3864389 /var/tmp/bdevperf.sock 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3864389 ']' 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:06.362 22:27:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.362 [2024-07-24 22:27:31.950061] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:06.362 [2024-07-24 22:27:31.950151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864389 ] 00:18:06.362 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.362 [2024-07-24 22:27:32.011489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.622 [2024-07-24 22:27:32.131521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.622 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:06.622 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:06.622 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:18:06.880 [2024-07-24 22:27:32.503563] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.880 [2024-07-24 22:27:32.503626] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:06.880 [2024-07-24 22:27:32.503639] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.SlthOY8JbD 00:18:06.880 request: 00:18:06.880 { 00:18:06.880 "name": "TLSTEST", 00:18:06.880 "trtype": "tcp", 00:18:06.880 "traddr": "10.0.0.2", 00:18:06.880 
"adrfam": "ipv4", 00:18:06.880 "trsvcid": "4420", 00:18:06.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.880 "prchk_reftag": false, 00:18:06.880 "prchk_guard": false, 00:18:06.880 "hdgst": false, 00:18:06.880 "ddgst": false, 00:18:06.880 "psk": "/tmp/tmp.SlthOY8JbD", 00:18:06.880 "method": "bdev_nvme_attach_controller", 00:18:06.880 "req_id": 1 00:18:06.880 } 00:18:06.880 Got JSON-RPC error response 00:18:06.880 response: 00:18:06.880 { 00:18:06.880 "code": -1, 00:18:06.880 "message": "Operation not permitted" 00:18:06.880 } 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3864389 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3864389 ']' 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3864389 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864389 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3864389' 00:18:06.880 killing process with pid 3864389 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3864389 00:18:06.880 Received shutdown signal, test time was about 10.000000 seconds 00:18:06.880 00:18:06.880 Latency(us) 00:18:06.880 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:18:06.880 =================================================================================================================== 00:18:06.880 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:06.880 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3864389 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3863230 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3863230 ']' 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3863230 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3863230 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3863230' 00:18:07.138 killing process with pid 3863230 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 3863230 00:18:07.138 [2024-07-24 22:27:32.744914] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:07.138 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3863230 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3864478 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3864478 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3864478 ']' 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.399 22:27:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.399 [2024-07-24 22:27:32.993574] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:07.399 [2024-07-24 22:27:32.993660] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:07.399 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.399 [2024-07-24 22:27:33.057176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.658 [2024-07-24 22:27:33.162263] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:07.658 [2024-07-24 22:27:33.162327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:07.658 [2024-07-24 22:27:33.162339] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:07.658 [2024-07-24 22:27:33.162350] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:07.658 [2024-07-24 22:27:33.162369] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:07.658 [2024-07-24 22:27:33.162413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SlthOY8JbD 00:18:07.658 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:07.915 [2024-07-24 22:27:33.561733] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.915 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:08.174 22:27:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:08.740 [2024-07-24 22:27:34.155349] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:08.740 [2024-07-24 22:27:34.155632] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:08.740 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:08.997 malloc0 00:18:08.997 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:09.255 22:27:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:18:09.514 [2024-07-24 22:27:35.051058] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:09.514 [2024-07-24 22:27:35.051103] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:09.514 [2024-07-24 22:27:35.051134] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:09.514 request: 00:18:09.514 { 
00:18:09.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.514 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.514 "psk": "/tmp/tmp.SlthOY8JbD", 00:18:09.514 "method": "nvmf_subsystem_add_host", 00:18:09.514 "req_id": 1 00:18:09.514 } 00:18:09.514 Got JSON-RPC error response 00:18:09.514 response: 00:18:09.514 { 00:18:09.514 "code": -32603, 00:18:09.514 "message": "Internal error" 00:18:09.514 } 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3864478 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3864478 ']' 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3864478 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864478 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3864478' 00:18:09.514 killing process with pid 3864478 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 3864478 00:18:09.514 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3864478 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.SlthOY8JbD 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3864793 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3864793 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3864793 ']' 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.773 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.773 [2024-07-24 22:27:35.353948] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:09.773 [2024-07-24 22:27:35.354033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.773 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.773 [2024-07-24 22:27:35.405216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.031 [2024-07-24 22:27:35.499100] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.031 [2024-07-24 22:27:35.499156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.031 [2024-07-24 22:27:35.499168] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.031 [2024-07-24 22:27:35.499178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.031 [2024-07-24 22:27:35.499196] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.031 [2024-07-24 22:27:35.499220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SlthOY8JbD 00:18:10.031 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:10.291 [2024-07-24 22:27:35.851620] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.291 22:27:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:10.550 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:10.809 [2024-07-24 22:27:36.409136] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:10.809 [2024-07-24 22:27:36.409377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:10.809 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:11.067 malloc0 00:18:11.067 22:27:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:11.325 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:18:11.895 [2024-07-24 22:27:37.305654] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3864936 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3864936 /var/tmp/bdevperf.sock 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3864936 ']' 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:11.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.895 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.895 [2024-07-24 22:27:37.374566] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:11.895 [2024-07-24 22:27:37.374650] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3864936 ] 00:18:11.895 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.895 [2024-07-24 22:27:37.431259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.895 [2024-07-24 22:27:37.537434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.154 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.154 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:12.154 22:27:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:18:12.412 [2024-07-24 22:27:37.910214] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.412 [2024-07-24 22:27:37.910349] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:12.412 TLSTESTn1 00:18:12.412 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:12.983 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:12.983 "subsystems": [ 00:18:12.983 { 00:18:12.983 "subsystem": "keyring", 00:18:12.983 "config": [] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "iobuf", 00:18:12.983 "config": [ 00:18:12.983 { 00:18:12.983 "method": "iobuf_set_options", 00:18:12.983 "params": { 00:18:12.983 "small_pool_count": 8192, 00:18:12.983 "large_pool_count": 1024, 00:18:12.983 "small_bufsize": 8192, 00:18:12.983 "large_bufsize": 135168 00:18:12.983 } 00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "sock", 00:18:12.983 "config": [ 00:18:12.983 { 00:18:12.983 "method": "sock_set_default_impl", 00:18:12.983 "params": { 00:18:12.983 "impl_name": "posix" 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "sock_impl_set_options", 00:18:12.983 "params": { 00:18:12.983 "impl_name": "ssl", 00:18:12.983 "recv_buf_size": 4096, 00:18:12.983 "send_buf_size": 4096, 00:18:12.983 "enable_recv_pipe": true, 00:18:12.983 "enable_quickack": false, 00:18:12.983 "enable_placement_id": 0, 00:18:12.983 "enable_zerocopy_send_server": true, 00:18:12.983 "enable_zerocopy_send_client": false, 00:18:12.983 "zerocopy_threshold": 0, 00:18:12.983 "tls_version": 0, 00:18:12.983 "enable_ktls": false 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "sock_impl_set_options", 00:18:12.983 "params": { 00:18:12.983 "impl_name": "posix", 00:18:12.983 "recv_buf_size": 2097152, 00:18:12.983 "send_buf_size": 2097152, 00:18:12.983 "enable_recv_pipe": true, 00:18:12.983 "enable_quickack": false, 00:18:12.983 "enable_placement_id": 0, 00:18:12.983 "enable_zerocopy_send_server": true, 00:18:12.983 "enable_zerocopy_send_client": false, 00:18:12.983 "zerocopy_threshold": 0, 00:18:12.983 "tls_version": 0, 00:18:12.983 "enable_ktls": false 00:18:12.983 } 
00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "vmd", 00:18:12.983 "config": [] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "accel", 00:18:12.983 "config": [ 00:18:12.983 { 00:18:12.983 "method": "accel_set_options", 00:18:12.983 "params": { 00:18:12.983 "small_cache_size": 128, 00:18:12.983 "large_cache_size": 16, 00:18:12.983 "task_count": 2048, 00:18:12.983 "sequence_count": 2048, 00:18:12.983 "buf_count": 2048 00:18:12.983 } 00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "bdev", 00:18:12.983 "config": [ 00:18:12.983 { 00:18:12.983 "method": "bdev_set_options", 00:18:12.983 "params": { 00:18:12.983 "bdev_io_pool_size": 65535, 00:18:12.983 "bdev_io_cache_size": 256, 00:18:12.983 "bdev_auto_examine": true, 00:18:12.983 "iobuf_small_cache_size": 128, 00:18:12.983 "iobuf_large_cache_size": 16 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_raid_set_options", 00:18:12.983 "params": { 00:18:12.983 "process_window_size_kb": 1024, 00:18:12.983 "process_max_bandwidth_mb_sec": 0 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_iscsi_set_options", 00:18:12.983 "params": { 00:18:12.983 "timeout_sec": 30 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_nvme_set_options", 00:18:12.983 "params": { 00:18:12.983 "action_on_timeout": "none", 00:18:12.983 "timeout_us": 0, 00:18:12.983 "timeout_admin_us": 0, 00:18:12.983 "keep_alive_timeout_ms": 10000, 00:18:12.983 "arbitration_burst": 0, 00:18:12.983 "low_priority_weight": 0, 00:18:12.983 "medium_priority_weight": 0, 00:18:12.983 "high_priority_weight": 0, 00:18:12.983 "nvme_adminq_poll_period_us": 10000, 00:18:12.983 "nvme_ioq_poll_period_us": 0, 00:18:12.983 "io_queue_requests": 0, 00:18:12.983 "delay_cmd_submit": true, 00:18:12.983 "transport_retry_count": 4, 00:18:12.983 "bdev_retry_count": 3, 00:18:12.983 "transport_ack_timeout": 0, 00:18:12.983 
"ctrlr_loss_timeout_sec": 0, 00:18:12.983 "reconnect_delay_sec": 0, 00:18:12.983 "fast_io_fail_timeout_sec": 0, 00:18:12.983 "disable_auto_failback": false, 00:18:12.983 "generate_uuids": false, 00:18:12.983 "transport_tos": 0, 00:18:12.983 "nvme_error_stat": false, 00:18:12.983 "rdma_srq_size": 0, 00:18:12.983 "io_path_stat": false, 00:18:12.983 "allow_accel_sequence": false, 00:18:12.983 "rdma_max_cq_size": 0, 00:18:12.983 "rdma_cm_event_timeout_ms": 0, 00:18:12.983 "dhchap_digests": [ 00:18:12.983 "sha256", 00:18:12.983 "sha384", 00:18:12.983 "sha512" 00:18:12.983 ], 00:18:12.983 "dhchap_dhgroups": [ 00:18:12.983 "null", 00:18:12.983 "ffdhe2048", 00:18:12.983 "ffdhe3072", 00:18:12.983 "ffdhe4096", 00:18:12.983 "ffdhe6144", 00:18:12.983 "ffdhe8192" 00:18:12.983 ] 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_nvme_set_hotplug", 00:18:12.983 "params": { 00:18:12.983 "period_us": 100000, 00:18:12.983 "enable": false 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_malloc_create", 00:18:12.983 "params": { 00:18:12.983 "name": "malloc0", 00:18:12.983 "num_blocks": 8192, 00:18:12.983 "block_size": 4096, 00:18:12.983 "physical_block_size": 4096, 00:18:12.983 "uuid": "5fc2a9dd-8e22-435c-b779-7ab4245f2eed", 00:18:12.983 "optimal_io_boundary": 0, 00:18:12.983 "md_size": 0, 00:18:12.983 "dif_type": 0, 00:18:12.983 "dif_is_head_of_md": false, 00:18:12.983 "dif_pi_format": 0 00:18:12.983 } 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "method": "bdev_wait_for_examine" 00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "nbd", 00:18:12.983 "config": [] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "scheduler", 00:18:12.983 "config": [ 00:18:12.983 { 00:18:12.983 "method": "framework_set_scheduler", 00:18:12.983 "params": { 00:18:12.983 "name": "static" 00:18:12.983 } 00:18:12.983 } 00:18:12.983 ] 00:18:12.983 }, 00:18:12.983 { 00:18:12.983 "subsystem": "nvmf", 00:18:12.983 
"config": [ 00:18:12.984 { 00:18:12.984 "method": "nvmf_set_config", 00:18:12.984 "params": { 00:18:12.984 "discovery_filter": "match_any", 00:18:12.984 "admin_cmd_passthru": { 00:18:12.984 "identify_ctrlr": false 00:18:12.984 } 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_set_max_subsystems", 00:18:12.984 "params": { 00:18:12.984 "max_subsystems": 1024 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_set_crdt", 00:18:12.984 "params": { 00:18:12.984 "crdt1": 0, 00:18:12.984 "crdt2": 0, 00:18:12.984 "crdt3": 0 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_create_transport", 00:18:12.984 "params": { 00:18:12.984 "trtype": "TCP", 00:18:12.984 "max_queue_depth": 128, 00:18:12.984 "max_io_qpairs_per_ctrlr": 127, 00:18:12.984 "in_capsule_data_size": 4096, 00:18:12.984 "max_io_size": 131072, 00:18:12.984 "io_unit_size": 131072, 00:18:12.984 "max_aq_depth": 128, 00:18:12.984 "num_shared_buffers": 511, 00:18:12.984 "buf_cache_size": 4294967295, 00:18:12.984 "dif_insert_or_strip": false, 00:18:12.984 "zcopy": false, 00:18:12.984 "c2h_success": false, 00:18:12.984 "sock_priority": 0, 00:18:12.984 "abort_timeout_sec": 1, 00:18:12.984 "ack_timeout": 0, 00:18:12.984 "data_wr_pool_size": 0 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_create_subsystem", 00:18:12.984 "params": { 00:18:12.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.984 "allow_any_host": false, 00:18:12.984 "serial_number": "SPDK00000000000001", 00:18:12.984 "model_number": "SPDK bdev Controller", 00:18:12.984 "max_namespaces": 10, 00:18:12.984 "min_cntlid": 1, 00:18:12.984 "max_cntlid": 65519, 00:18:12.984 "ana_reporting": false 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_subsystem_add_host", 00:18:12.984 "params": { 00:18:12.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.984 "host": "nqn.2016-06.io.spdk:host1", 00:18:12.984 "psk": "/tmp/tmp.SlthOY8JbD" 
00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_subsystem_add_ns", 00:18:12.984 "params": { 00:18:12.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.984 "namespace": { 00:18:12.984 "nsid": 1, 00:18:12.984 "bdev_name": "malloc0", 00:18:12.984 "nguid": "5FC2A9DD8E22435CB7797AB4245F2EED", 00:18:12.984 "uuid": "5fc2a9dd-8e22-435c-b779-7ab4245f2eed", 00:18:12.984 "no_auto_visible": false 00:18:12.984 } 00:18:12.984 } 00:18:12.984 }, 00:18:12.984 { 00:18:12.984 "method": "nvmf_subsystem_add_listener", 00:18:12.984 "params": { 00:18:12.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.984 "listen_address": { 00:18:12.984 "trtype": "TCP", 00:18:12.984 "adrfam": "IPv4", 00:18:12.984 "traddr": "10.0.0.2", 00:18:12.984 "trsvcid": "4420" 00:18:12.984 }, 00:18:12.984 "secure_channel": true 00:18:12.984 } 00:18:12.984 } 00:18:12.984 ] 00:18:12.984 } 00:18:12.984 ] 00:18:12.984 }' 00:18:12.984 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:13.245 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:13.245 "subsystems": [ 00:18:13.245 { 00:18:13.245 "subsystem": "keyring", 00:18:13.245 "config": [] 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "subsystem": "iobuf", 00:18:13.245 "config": [ 00:18:13.245 { 00:18:13.245 "method": "iobuf_set_options", 00:18:13.245 "params": { 00:18:13.245 "small_pool_count": 8192, 00:18:13.245 "large_pool_count": 1024, 00:18:13.245 "small_bufsize": 8192, 00:18:13.245 "large_bufsize": 135168 00:18:13.245 } 00:18:13.245 } 00:18:13.245 ] 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "subsystem": "sock", 00:18:13.245 "config": [ 00:18:13.245 { 00:18:13.245 "method": "sock_set_default_impl", 00:18:13.245 "params": { 00:18:13.245 "impl_name": "posix" 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "method": "sock_impl_set_options", 00:18:13.245 
"params": { 00:18:13.245 "impl_name": "ssl", 00:18:13.245 "recv_buf_size": 4096, 00:18:13.245 "send_buf_size": 4096, 00:18:13.245 "enable_recv_pipe": true, 00:18:13.245 "enable_quickack": false, 00:18:13.245 "enable_placement_id": 0, 00:18:13.245 "enable_zerocopy_send_server": true, 00:18:13.245 "enable_zerocopy_send_client": false, 00:18:13.245 "zerocopy_threshold": 0, 00:18:13.245 "tls_version": 0, 00:18:13.245 "enable_ktls": false 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "method": "sock_impl_set_options", 00:18:13.245 "params": { 00:18:13.245 "impl_name": "posix", 00:18:13.245 "recv_buf_size": 2097152, 00:18:13.245 "send_buf_size": 2097152, 00:18:13.245 "enable_recv_pipe": true, 00:18:13.245 "enable_quickack": false, 00:18:13.245 "enable_placement_id": 0, 00:18:13.245 "enable_zerocopy_send_server": true, 00:18:13.245 "enable_zerocopy_send_client": false, 00:18:13.245 "zerocopy_threshold": 0, 00:18:13.245 "tls_version": 0, 00:18:13.245 "enable_ktls": false 00:18:13.245 } 00:18:13.245 } 00:18:13.245 ] 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "subsystem": "vmd", 00:18:13.245 "config": [] 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "subsystem": "accel", 00:18:13.245 "config": [ 00:18:13.245 { 00:18:13.245 "method": "accel_set_options", 00:18:13.245 "params": { 00:18:13.245 "small_cache_size": 128, 00:18:13.245 "large_cache_size": 16, 00:18:13.245 "task_count": 2048, 00:18:13.245 "sequence_count": 2048, 00:18:13.245 "buf_count": 2048 00:18:13.245 } 00:18:13.245 } 00:18:13.245 ] 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "subsystem": "bdev", 00:18:13.245 "config": [ 00:18:13.245 { 00:18:13.245 "method": "bdev_set_options", 00:18:13.245 "params": { 00:18:13.245 "bdev_io_pool_size": 65535, 00:18:13.245 "bdev_io_cache_size": 256, 00:18:13.245 "bdev_auto_examine": true, 00:18:13.245 "iobuf_small_cache_size": 128, 00:18:13.245 "iobuf_large_cache_size": 16 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "method": "bdev_raid_set_options", 
00:18:13.245 "params": { 00:18:13.245 "process_window_size_kb": 1024, 00:18:13.245 "process_max_bandwidth_mb_sec": 0 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "method": "bdev_iscsi_set_options", 00:18:13.245 "params": { 00:18:13.245 "timeout_sec": 30 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.245 "method": "bdev_nvme_set_options", 00:18:13.245 "params": { 00:18:13.245 "action_on_timeout": "none", 00:18:13.245 "timeout_us": 0, 00:18:13.245 "timeout_admin_us": 0, 00:18:13.245 "keep_alive_timeout_ms": 10000, 00:18:13.245 "arbitration_burst": 0, 00:18:13.245 "low_priority_weight": 0, 00:18:13.245 "medium_priority_weight": 0, 00:18:13.245 "high_priority_weight": 0, 00:18:13.245 "nvme_adminq_poll_period_us": 10000, 00:18:13.245 "nvme_ioq_poll_period_us": 0, 00:18:13.245 "io_queue_requests": 512, 00:18:13.245 "delay_cmd_submit": true, 00:18:13.245 "transport_retry_count": 4, 00:18:13.245 "bdev_retry_count": 3, 00:18:13.245 "transport_ack_timeout": 0, 00:18:13.245 "ctrlr_loss_timeout_sec": 0, 00:18:13.245 "reconnect_delay_sec": 0, 00:18:13.245 "fast_io_fail_timeout_sec": 0, 00:18:13.245 "disable_auto_failback": false, 00:18:13.245 "generate_uuids": false, 00:18:13.245 "transport_tos": 0, 00:18:13.245 "nvme_error_stat": false, 00:18:13.245 "rdma_srq_size": 0, 00:18:13.245 "io_path_stat": false, 00:18:13.245 "allow_accel_sequence": false, 00:18:13.245 "rdma_max_cq_size": 0, 00:18:13.245 "rdma_cm_event_timeout_ms": 0, 00:18:13.245 "dhchap_digests": [ 00:18:13.245 "sha256", 00:18:13.245 "sha384", 00:18:13.245 "sha512" 00:18:13.245 ], 00:18:13.245 "dhchap_dhgroups": [ 00:18:13.245 "null", 00:18:13.245 "ffdhe2048", 00:18:13.245 "ffdhe3072", 00:18:13.245 "ffdhe4096", 00:18:13.245 "ffdhe6144", 00:18:13.245 "ffdhe8192" 00:18:13.245 ] 00:18:13.245 } 00:18:13.245 }, 00:18:13.245 { 00:18:13.246 "method": "bdev_nvme_attach_controller", 00:18:13.246 "params": { 00:18:13.246 "name": "TLSTEST", 00:18:13.246 "trtype": "TCP", 00:18:13.246 "adrfam": "IPv4", 
00:18:13.246 "traddr": "10.0.0.2", 00:18:13.246 "trsvcid": "4420", 00:18:13.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.246 "prchk_reftag": false, 00:18:13.246 "prchk_guard": false, 00:18:13.246 "ctrlr_loss_timeout_sec": 0, 00:18:13.246 "reconnect_delay_sec": 0, 00:18:13.246 "fast_io_fail_timeout_sec": 0, 00:18:13.246 "psk": "/tmp/tmp.SlthOY8JbD", 00:18:13.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.246 "hdgst": false, 00:18:13.246 "ddgst": false 00:18:13.246 } 00:18:13.246 }, 00:18:13.246 { 00:18:13.246 "method": "bdev_nvme_set_hotplug", 00:18:13.246 "params": { 00:18:13.246 "period_us": 100000, 00:18:13.246 "enable": false 00:18:13.246 } 00:18:13.246 }, 00:18:13.246 { 00:18:13.246 "method": "bdev_wait_for_examine" 00:18:13.246 } 00:18:13.246 ] 00:18:13.246 }, 00:18:13.246 { 00:18:13.246 "subsystem": "nbd", 00:18:13.246 "config": [] 00:18:13.246 } 00:18:13.246 ] 00:18:13.246 }' 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3864936 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3864936 ']' 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3864936 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864936 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3864936' 00:18:13.246 killing process with 
pid 3864936 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3864936 00:18:13.246 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.246 00:18:13.246 Latency(us) 00:18:13.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.246 =================================================================================================================== 00:18:13.246 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.246 [2024-07-24 22:27:38.770885] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:13.246 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3864936 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3864793 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3864793 ']' 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3864793 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864793 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3864793' 00:18:13.505 killing process with pid 3864793 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@967 -- # kill 3864793 00:18:13.505 [2024-07-24 22:27:38.988438] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:13.505 22:27:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3864793 00:18:13.505 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:13.505 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.505 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.505 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:13.505 "subsystems": [ 00:18:13.505 { 00:18:13.505 "subsystem": "keyring", 00:18:13.505 "config": [] 00:18:13.505 }, 00:18:13.505 { 00:18:13.505 "subsystem": "iobuf", 00:18:13.505 "config": [ 00:18:13.505 { 00:18:13.505 "method": "iobuf_set_options", 00:18:13.505 "params": { 00:18:13.505 "small_pool_count": 8192, 00:18:13.505 "large_pool_count": 1024, 00:18:13.505 "small_bufsize": 8192, 00:18:13.505 "large_bufsize": 135168 00:18:13.505 } 00:18:13.505 } 00:18:13.505 ] 00:18:13.505 }, 00:18:13.505 { 00:18:13.505 "subsystem": "sock", 00:18:13.505 "config": [ 00:18:13.505 { 00:18:13.505 "method": "sock_set_default_impl", 00:18:13.505 "params": { 00:18:13.505 "impl_name": "posix" 00:18:13.505 } 00:18:13.505 }, 00:18:13.505 { 00:18:13.505 "method": "sock_impl_set_options", 00:18:13.505 "params": { 00:18:13.505 "impl_name": "ssl", 00:18:13.506 "recv_buf_size": 4096, 00:18:13.506 "send_buf_size": 4096, 00:18:13.506 "enable_recv_pipe": true, 00:18:13.506 "enable_quickack": false, 00:18:13.506 "enable_placement_id": 0, 00:18:13.506 "enable_zerocopy_send_server": true, 00:18:13.506 "enable_zerocopy_send_client": false, 00:18:13.506 "zerocopy_threshold": 0, 00:18:13.506 "tls_version": 0, 00:18:13.506 "enable_ktls": false 
00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "sock_impl_set_options", 00:18:13.506 "params": { 00:18:13.506 "impl_name": "posix", 00:18:13.506 "recv_buf_size": 2097152, 00:18:13.506 "send_buf_size": 2097152, 00:18:13.506 "enable_recv_pipe": true, 00:18:13.506 "enable_quickack": false, 00:18:13.506 "enable_placement_id": 0, 00:18:13.506 "enable_zerocopy_send_server": true, 00:18:13.506 "enable_zerocopy_send_client": false, 00:18:13.506 "zerocopy_threshold": 0, 00:18:13.506 "tls_version": 0, 00:18:13.506 "enable_ktls": false 00:18:13.506 } 00:18:13.506 } 00:18:13.506 ] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "vmd", 00:18:13.506 "config": [] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "accel", 00:18:13.506 "config": [ 00:18:13.506 { 00:18:13.506 "method": "accel_set_options", 00:18:13.506 "params": { 00:18:13.506 "small_cache_size": 128, 00:18:13.506 "large_cache_size": 16, 00:18:13.506 "task_count": 2048, 00:18:13.506 "sequence_count": 2048, 00:18:13.506 "buf_count": 2048 00:18:13.506 } 00:18:13.506 } 00:18:13.506 ] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "bdev", 00:18:13.506 "config": [ 00:18:13.506 { 00:18:13.506 "method": "bdev_set_options", 00:18:13.506 "params": { 00:18:13.506 "bdev_io_pool_size": 65535, 00:18:13.506 "bdev_io_cache_size": 256, 00:18:13.506 "bdev_auto_examine": true, 00:18:13.506 "iobuf_small_cache_size": 128, 00:18:13.506 "iobuf_large_cache_size": 16 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_raid_set_options", 00:18:13.506 "params": { 00:18:13.506 "process_window_size_kb": 1024, 00:18:13.506 "process_max_bandwidth_mb_sec": 0 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_iscsi_set_options", 00:18:13.506 "params": { 00:18:13.506 "timeout_sec": 30 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_nvme_set_options", 00:18:13.506 "params": { 00:18:13.506 "action_on_timeout": "none", 00:18:13.506 
"timeout_us": 0, 00:18:13.506 "timeout_admin_us": 0, 00:18:13.506 "keep_alive_timeout_ms": 10000, 00:18:13.506 "arbitration_burst": 0, 00:18:13.506 "low_priority_weight": 0, 00:18:13.506 "medium_priority_weight": 0, 00:18:13.506 "high_priority_weight": 0, 00:18:13.506 "nvme_adminq_poll_period_us": 10000, 00:18:13.506 "nvme_ioq_poll_period_us": 0, 00:18:13.506 "io_queue_requests": 0, 00:18:13.506 "delay_cmd_submit": true, 00:18:13.506 "transport_retry_count": 4, 00:18:13.506 "bdev_retry_count": 3, 00:18:13.506 "transport_ack_timeout": 0, 00:18:13.506 "ctrlr_loss_timeout_sec": 0, 00:18:13.506 "reconnect_delay_sec": 0, 00:18:13.506 "fast_io_fail_timeout_sec": 0, 00:18:13.506 "disable_auto_failback": false, 00:18:13.506 "generate_uuids": false, 00:18:13.506 "transport_tos": 0, 00:18:13.506 "nvme_error_stat": false, 00:18:13.506 "rdma_srq_size": 0, 00:18:13.506 "io_path_stat": false, 00:18:13.506 "allow_accel_sequence": false, 00:18:13.506 "rdma_max_cq_size": 0, 00:18:13.506 "rdma_cm_event_timeout_ms": 0, 00:18:13.506 "dhchap_digests": [ 00:18:13.506 "sha256", 00:18:13.506 "sha384", 00:18:13.506 "sha512" 00:18:13.506 ], 00:18:13.506 "dhchap_dhgroups": [ 00:18:13.506 "null", 00:18:13.506 "ffdhe2048", 00:18:13.506 "ffdhe3072", 00:18:13.506 "ffdhe4096", 00:18:13.506 "ffdhe6144", 00:18:13.506 "ffdhe8192" 00:18:13.506 ] 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_nvme_set_hotplug", 00:18:13.506 "params": { 00:18:13.506 "period_us": 100000, 00:18:13.506 "enable": false 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_malloc_create", 00:18:13.506 "params": { 00:18:13.506 "name": "malloc0", 00:18:13.506 "num_blocks": 8192, 00:18:13.506 "block_size": 4096, 00:18:13.506 "physical_block_size": 4096, 00:18:13.506 "uuid": "5fc2a9dd-8e22-435c-b779-7ab4245f2eed", 00:18:13.506 "optimal_io_boundary": 0, 00:18:13.506 "md_size": 0, 00:18:13.506 "dif_type": 0, 00:18:13.506 "dif_is_head_of_md": false, 00:18:13.506 "dif_pi_format": 0 
00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "bdev_wait_for_examine" 00:18:13.506 } 00:18:13.506 ] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "nbd", 00:18:13.506 "config": [] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "scheduler", 00:18:13.506 "config": [ 00:18:13.506 { 00:18:13.506 "method": "framework_set_scheduler", 00:18:13.506 "params": { 00:18:13.506 "name": "static" 00:18:13.506 } 00:18:13.506 } 00:18:13.506 ] 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "subsystem": "nvmf", 00:18:13.506 "config": [ 00:18:13.506 { 00:18:13.506 "method": "nvmf_set_config", 00:18:13.506 "params": { 00:18:13.506 "discovery_filter": "match_any", 00:18:13.506 "admin_cmd_passthru": { 00:18:13.506 "identify_ctrlr": false 00:18:13.506 } 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_set_max_subsystems", 00:18:13.506 "params": { 00:18:13.506 "max_subsystems": 1024 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_set_crdt", 00:18:13.506 "params": { 00:18:13.506 "crdt1": 0, 00:18:13.506 "crdt2": 0, 00:18:13.506 "crdt3": 0 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_create_transport", 00:18:13.506 "params": { 00:18:13.506 "trtype": "TCP", 00:18:13.506 "max_queue_depth": 128, 00:18:13.506 "max_io_qpairs_per_ctrlr": 127, 00:18:13.506 "in_capsule_data_size": 4096, 00:18:13.506 "max_io_size": 131072, 00:18:13.506 "io_unit_size": 131072, 00:18:13.506 "max_aq_depth": 128, 00:18:13.506 "num_shared_buffers": 511, 00:18:13.506 "buf_cache_size": 4294967295, 00:18:13.506 "dif_insert_or_strip": false, 00:18:13.506 "zcopy": false, 00:18:13.506 "c2h_success": false, 00:18:13.506 "sock_priority": 0, 00:18:13.506 "abort_timeout_sec": 1, 00:18:13.506 "ack_timeout": 0, 00:18:13.506 "data_wr_pool_size": 0 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_create_subsystem", 00:18:13.506 "params": { 00:18:13.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:13.506 "allow_any_host": false, 00:18:13.506 "serial_number": "SPDK00000000000001", 00:18:13.506 "model_number": "SPDK bdev Controller", 00:18:13.506 "max_namespaces": 10, 00:18:13.506 "min_cntlid": 1, 00:18:13.506 "max_cntlid": 65519, 00:18:13.506 "ana_reporting": false 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_subsystem_add_host", 00:18:13.506 "params": { 00:18:13.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.506 "host": "nqn.2016-06.io.spdk:host1", 00:18:13.506 "psk": "/tmp/tmp.SlthOY8JbD" 00:18:13.506 } 00:18:13.506 }, 00:18:13.506 { 00:18:13.506 "method": "nvmf_subsystem_add_ns", 00:18:13.506 "params": { 00:18:13.506 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.506 "namespace": { 00:18:13.506 "nsid": 1, 00:18:13.506 "bdev_name": "malloc0", 00:18:13.506 "nguid": "5FC2A9DD8E22435CB7797AB4245F2EED", 00:18:13.507 "uuid": "5fc2a9dd-8e22-435c-b779-7ab4245f2eed", 00:18:13.507 "no_auto_visible": false 00:18:13.507 } 00:18:13.507 } 00:18:13.507 }, 00:18:13.507 { 00:18:13.507 "method": "nvmf_subsystem_add_listener", 00:18:13.507 "params": { 00:18:13.507 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.507 "listen_address": { 00:18:13.507 "trtype": "TCP", 00:18:13.507 "adrfam": "IPv4", 00:18:13.507 "traddr": "10.0.0.2", 00:18:13.507 "trsvcid": "4420" 00:18:13.507 }, 00:18:13.507 "secure_channel": true 00:18:13.507 } 00:18:13.507 } 00:18:13.507 ] 00:18:13.507 } 00:18:13.507 ] 00:18:13.507 }' 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3865139 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3865139 00:18:13.507 
22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3865139 ']' 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.507 22:27:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.766 [2024-07-24 22:27:39.240442] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:13.766 [2024-07-24 22:27:39.240545] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.766 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.766 [2024-07-24 22:27:39.301424] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.766 [2024-07-24 22:27:39.403054] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.766 [2024-07-24 22:27:39.403118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.766 [2024-07-24 22:27:39.403131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.766 [2024-07-24 22:27:39.403142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:13.767 [2024-07-24 22:27:39.403151] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.767 [2024-07-24 22:27:39.403250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.027 [2024-07-24 22:27:39.614339] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.027 [2024-07-24 22:27:39.639917] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:14.027 [2024-07-24 22:27:39.655992] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.027 [2024-07-24 22:27:39.656198] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3865259 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3865259 /var/tmp/bdevperf.sock 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3865259 ']' 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.594 22:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:14.594 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:14.594 "subsystems": [ 00:18:14.594 { 00:18:14.594 "subsystem": "keyring", 00:18:14.594 "config": [] 00:18:14.594 }, 00:18:14.594 { 00:18:14.594 "subsystem": "iobuf", 00:18:14.594 "config": [ 00:18:14.594 { 00:18:14.594 "method": "iobuf_set_options", 00:18:14.594 "params": { 00:18:14.594 "small_pool_count": 8192, 00:18:14.594 "large_pool_count": 1024, 00:18:14.594 "small_bufsize": 8192, 00:18:14.594 "large_bufsize": 135168 00:18:14.594 } 00:18:14.594 } 00:18:14.594 ] 00:18:14.594 }, 00:18:14.594 { 00:18:14.594 "subsystem": "sock", 00:18:14.594 "config": [ 00:18:14.594 { 00:18:14.594 "method": "sock_set_default_impl", 00:18:14.594 "params": { 00:18:14.594 "impl_name": "posix" 00:18:14.594 } 00:18:14.594 }, 00:18:14.594 { 00:18:14.594 "method": "sock_impl_set_options", 00:18:14.594 "params": { 00:18:14.594 "impl_name": "ssl", 00:18:14.594 "recv_buf_size": 4096, 00:18:14.594 "send_buf_size": 4096, 00:18:14.594 "enable_recv_pipe": true, 00:18:14.594 "enable_quickack": false, 00:18:14.594 "enable_placement_id": 0, 00:18:14.594 "enable_zerocopy_send_server": true, 00:18:14.594 "enable_zerocopy_send_client": false, 00:18:14.594 
"zerocopy_threshold": 0, 00:18:14.594 "tls_version": 0, 00:18:14.594 "enable_ktls": false 00:18:14.594 } 00:18:14.594 }, 00:18:14.594 { 00:18:14.594 "method": "sock_impl_set_options", 00:18:14.594 "params": { 00:18:14.594 "impl_name": "posix", 00:18:14.594 "recv_buf_size": 2097152, 00:18:14.594 "send_buf_size": 2097152, 00:18:14.594 "enable_recv_pipe": true, 00:18:14.594 "enable_quickack": false, 00:18:14.594 "enable_placement_id": 0, 00:18:14.595 "enable_zerocopy_send_server": true, 00:18:14.595 "enable_zerocopy_send_client": false, 00:18:14.595 "zerocopy_threshold": 0, 00:18:14.595 "tls_version": 0, 00:18:14.595 "enable_ktls": false 00:18:14.595 } 00:18:14.595 } 00:18:14.595 ] 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "subsystem": "vmd", 00:18:14.595 "config": [] 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "subsystem": "accel", 00:18:14.595 "config": [ 00:18:14.595 { 00:18:14.595 "method": "accel_set_options", 00:18:14.595 "params": { 00:18:14.595 "small_cache_size": 128, 00:18:14.595 "large_cache_size": 16, 00:18:14.595 "task_count": 2048, 00:18:14.595 "sequence_count": 2048, 00:18:14.595 "buf_count": 2048 00:18:14.595 } 00:18:14.595 } 00:18:14.595 ] 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "subsystem": "bdev", 00:18:14.595 "config": [ 00:18:14.595 { 00:18:14.595 "method": "bdev_set_options", 00:18:14.595 "params": { 00:18:14.595 "bdev_io_pool_size": 65535, 00:18:14.595 "bdev_io_cache_size": 256, 00:18:14.595 "bdev_auto_examine": true, 00:18:14.595 "iobuf_small_cache_size": 128, 00:18:14.595 "iobuf_large_cache_size": 16 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": "bdev_raid_set_options", 00:18:14.595 "params": { 00:18:14.595 "process_window_size_kb": 1024, 00:18:14.595 "process_max_bandwidth_mb_sec": 0 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": "bdev_iscsi_set_options", 00:18:14.595 "params": { 00:18:14.595 "timeout_sec": 30 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": 
"bdev_nvme_set_options", 00:18:14.595 "params": { 00:18:14.595 "action_on_timeout": "none", 00:18:14.595 "timeout_us": 0, 00:18:14.595 "timeout_admin_us": 0, 00:18:14.595 "keep_alive_timeout_ms": 10000, 00:18:14.595 "arbitration_burst": 0, 00:18:14.595 "low_priority_weight": 0, 00:18:14.595 "medium_priority_weight": 0, 00:18:14.595 "high_priority_weight": 0, 00:18:14.595 "nvme_adminq_poll_period_us": 10000, 00:18:14.595 "nvme_ioq_poll_period_us": 0, 00:18:14.595 "io_queue_requests": 512, 00:18:14.595 "delay_cmd_submit": true, 00:18:14.595 "transport_retry_count": 4, 00:18:14.595 "bdev_retry_count": 3, 00:18:14.595 "transport_ack_timeout": 0, 00:18:14.595 "ctrlr_loss_timeout_sec": 0, 00:18:14.595 "reconnect_delay_sec": 0, 00:18:14.595 "fast_io_fail_timeout_sec": 0, 00:18:14.595 "disable_auto_failback": false, 00:18:14.595 "generate_uuids": false, 00:18:14.595 "transport_tos": 0, 00:18:14.595 "nvme_error_stat": false, 00:18:14.595 "rdma_srq_size": 0, 00:18:14.595 "io_path_stat": false, 00:18:14.595 "allow_accel_sequence": false, 00:18:14.595 "rdma_max_cq_size": 0, 00:18:14.595 "rdma_cm_event_timeout_ms": 0, 00:18:14.595 "dhchap_digests": [ 00:18:14.595 "sha256", 00:18:14.595 "sha384", 00:18:14.595 "sha512" 00:18:14.595 ], 00:18:14.595 "dhchap_dhgroups": [ 00:18:14.595 "null", 00:18:14.595 "ffdhe2048", 00:18:14.595 "ffdhe3072", 00:18:14.595 "ffdhe4096", 00:18:14.595 "ffdhe6144", 00:18:14.595 "ffdhe8192" 00:18:14.595 ] 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": "bdev_nvme_attach_controller", 00:18:14.595 "params": { 00:18:14.595 "name": "TLSTEST", 00:18:14.595 "trtype": "TCP", 00:18:14.595 "adrfam": "IPv4", 00:18:14.595 "traddr": "10.0.0.2", 00:18:14.595 "trsvcid": "4420", 00:18:14.595 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.595 "prchk_reftag": false, 00:18:14.595 "prchk_guard": false, 00:18:14.595 "ctrlr_loss_timeout_sec": 0, 00:18:14.595 "reconnect_delay_sec": 0, 00:18:14.595 "fast_io_fail_timeout_sec": 0, 00:18:14.595 "psk": 
"/tmp/tmp.SlthOY8JbD", 00:18:14.595 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.595 "hdgst": false, 00:18:14.595 "ddgst": false 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": "bdev_nvme_set_hotplug", 00:18:14.595 "params": { 00:18:14.595 "period_us": 100000, 00:18:14.595 "enable": false 00:18:14.595 } 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "method": "bdev_wait_for_examine" 00:18:14.595 } 00:18:14.595 ] 00:18:14.595 }, 00:18:14.595 { 00:18:14.595 "subsystem": "nbd", 00:18:14.595 "config": [] 00:18:14.595 } 00:18:14.595 ] 00:18:14.595 }' 00:18:14.595 22:27:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.595 [2024-07-24 22:27:40.284229] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:14.595 [2024-07-24 22:27:40.284326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3865259 ] 00:18:14.853 EAL: No free 2048 kB hugepages reported on node 1 00:18:14.853 [2024-07-24 22:27:40.345487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.853 [2024-07-24 22:27:40.465502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:15.111 [2024-07-24 22:27:40.625197] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:15.112 [2024-07-24 22:27:40.625344] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:15.676 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:15.676 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:15.676 22:27:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:15.934 Running I/O for 10 seconds... 00:18:25.899 00:18:25.899 Latency(us) 00:18:25.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.899 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:25.899 Verification LBA range: start 0x0 length 0x2000 00:18:25.899 TLSTESTn1 : 10.03 3639.28 14.22 0.00 0.00 35095.65 8204.14 37476.88 00:18:25.899 =================================================================================================================== 00:18:25.899 Total : 3639.28 14.22 0.00 0.00 35095.65 8204.14 37476.88 00:18:25.899 0 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3865259 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3865259 ']' 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3865259 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3865259 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3865259' 00:18:25.899 killing process with pid 3865259 00:18:25.899 22:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3865259 00:18:25.899 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.899 00:18:25.899 Latency(us) 00:18:25.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.899 =================================================================================================================== 00:18:25.899 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:25.899 [2024-07-24 22:27:51.540920] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:25.899 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3865259 00:18:26.158 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3865139 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3865139 ']' 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3865139 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3865139 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3865139' 00:18:26.159 killing process with pid 3865139 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3865139 00:18:26.159 
[2024-07-24 22:27:51.782003] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:26.159 22:27:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3865139 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3866360 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3866360 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3866360 ']' 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.441 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.441 [2024-07-24 22:27:52.068247] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:26.441 [2024-07-24 22:27:52.068350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.441 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.703 [2024-07-24 22:27:52.134937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.703 [2024-07-24 22:27:52.254239] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.703 [2024-07-24 22:27:52.254302] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.703 [2024-07-24 22:27:52.254318] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.703 [2024-07-24 22:27:52.254331] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.703 [2024-07-24 22:27:52.254343] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.703 [2024-07-24 22:27:52.254373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.SlthOY8JbD 00:18:26.703 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.SlthOY8JbD 00:18:26.704 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:26.961 [2024-07-24 22:27:52.656317] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.219 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:27.476 22:27:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:27.734 [2024-07-24 22:27:53.249887] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:27.734 [2024-07-24 22:27:53.250142] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:27.734 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:27.991 malloc0 00:18:27.991 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:28.249 22:27:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.SlthOY8JbD 00:18:28.507 [2024-07-24 22:27:54.150429] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3866584 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3866584 /var/tmp/bdevperf.sock 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3866584 ']' 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:28.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.507 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:28.765 [2024-07-24 22:27:54.219816] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:28.765 [2024-07-24 22:27:54.219907] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866584 ] 00:18:28.765 EAL: No free 2048 kB hugepages reported on node 1 00:18:28.765 [2024-07-24 22:27:54.280931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.765 [2024-07-24 22:27:54.400811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.023 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.023 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:29.023 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SlthOY8JbD 00:18:29.280 22:27:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:29.538 [2024-07-24 22:27:55.082254] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:29.538 nvme0n1 00:18:29.538 22:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:29.797 Running I/O for 1 seconds... 00:18:30.731 00:18:30.731 Latency(us) 00:18:30.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.731 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:30.731 Verification LBA range: start 0x0 length 0x2000 00:18:30.731 nvme0n1 : 1.02 3173.44 12.40 0.00 0.00 39863.26 9369.22 41748.86 00:18:30.731 =================================================================================================================== 00:18:30.731 Total : 3173.44 12.40 0.00 0.00 39863.26 9369.22 41748.86 00:18:30.731 0 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3866584 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3866584 ']' 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3866584 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866584 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866584' 00:18:30.731 killing process with pid 3866584 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 
3866584 00:18:30.731 Received shutdown signal, test time was about 1.000000 seconds 00:18:30.731 00:18:30.731 Latency(us) 00:18:30.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.731 =================================================================================================================== 00:18:30.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.731 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3866584 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3866360 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3866360 ']' 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3866360 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866360 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:30.990 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866360' 00:18:30.991 killing process with pid 3866360 00:18:30.991 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3866360 00:18:30.991 [2024-07-24 22:27:56.623693] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:30.991 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3866360 
00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3866805 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3866805 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3866805 ']' 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.249 22:27:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.249 [2024-07-24 22:27:56.908772] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:18:31.249 [2024-07-24 22:27:56.908870] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.249 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.508 [2024-07-24 22:27:56.974356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.508 [2024-07-24 22:27:57.092580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.508 [2024-07-24 22:27:57.092646] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.508 [2024-07-24 22:27:57.092662] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.508 [2024-07-24 22:27:57.092675] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.508 [2024-07-24 22:27:57.092688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.508 [2024-07-24 22:27:57.092717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.508 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:31.508 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:31.508 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:31.508 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:31.508 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.767 [2024-07-24 22:27:57.228685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.767 malloc0 00:18:31.767 [2024-07-24 22:27:57.259024] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.767 [2024-07-24 22:27:57.273713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3866905 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3866905 /var/tmp/bdevperf.sock 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3866905 ']' 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.767 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.767 [2024-07-24 22:27:57.345907] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:18:31.767 [2024-07-24 22:27:57.345996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3866905 ] 00:18:31.767 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.767 [2024-07-24 22:27:57.407106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.025 [2024-07-24 22:27:57.527590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.025 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.025 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:32.025 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SlthOY8JbD 00:18:32.283 22:27:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:32.541 [2024-07-24 22:27:58.204920] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:32.800 nvme0n1 00:18:32.800 22:27:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:32.800 Running I/O for 1 seconds... 
00:18:34.177 00:18:34.177 Latency(us) 00:18:34.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.177 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:34.177 Verification LBA range: start 0x0 length 0x2000 00:18:34.177 nvme0n1 : 1.03 3037.94 11.87 0.00 0.00 41596.94 7573.05 61749.48 00:18:34.177 =================================================================================================================== 00:18:34.177 Total : 3037.94 11.87 0.00 0.00 41596.94 7573.05 61749.48 00:18:34.177 0 00:18:34.177 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:34.177 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.177 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.177 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.177 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:34.177 "subsystems": [ 00:18:34.177 { 00:18:34.177 "subsystem": "keyring", 00:18:34.177 "config": [ 00:18:34.177 { 00:18:34.177 "method": "keyring_file_add_key", 00:18:34.177 "params": { 00:18:34.177 "name": "key0", 00:18:34.177 "path": "/tmp/tmp.SlthOY8JbD" 00:18:34.177 } 00:18:34.177 } 00:18:34.177 ] 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "subsystem": "iobuf", 00:18:34.177 "config": [ 00:18:34.177 { 00:18:34.177 "method": "iobuf_set_options", 00:18:34.177 "params": { 00:18:34.177 "small_pool_count": 8192, 00:18:34.177 "large_pool_count": 1024, 00:18:34.177 "small_bufsize": 8192, 00:18:34.177 "large_bufsize": 135168 00:18:34.177 } 00:18:34.177 } 00:18:34.177 ] 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "subsystem": "sock", 00:18:34.177 "config": [ 00:18:34.177 { 00:18:34.177 "method": "sock_set_default_impl", 00:18:34.177 "params": { 00:18:34.177 "impl_name": "posix" 00:18:34.177 } 
00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "method": "sock_impl_set_options", 00:18:34.177 "params": { 00:18:34.177 "impl_name": "ssl", 00:18:34.177 "recv_buf_size": 4096, 00:18:34.177 "send_buf_size": 4096, 00:18:34.177 "enable_recv_pipe": true, 00:18:34.177 "enable_quickack": false, 00:18:34.177 "enable_placement_id": 0, 00:18:34.177 "enable_zerocopy_send_server": true, 00:18:34.177 "enable_zerocopy_send_client": false, 00:18:34.177 "zerocopy_threshold": 0, 00:18:34.177 "tls_version": 0, 00:18:34.177 "enable_ktls": false 00:18:34.177 } 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "method": "sock_impl_set_options", 00:18:34.177 "params": { 00:18:34.177 "impl_name": "posix", 00:18:34.177 "recv_buf_size": 2097152, 00:18:34.177 "send_buf_size": 2097152, 00:18:34.177 "enable_recv_pipe": true, 00:18:34.177 "enable_quickack": false, 00:18:34.177 "enable_placement_id": 0, 00:18:34.177 "enable_zerocopy_send_server": true, 00:18:34.177 "enable_zerocopy_send_client": false, 00:18:34.177 "zerocopy_threshold": 0, 00:18:34.177 "tls_version": 0, 00:18:34.177 "enable_ktls": false 00:18:34.177 } 00:18:34.177 } 00:18:34.177 ] 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "subsystem": "vmd", 00:18:34.177 "config": [] 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "subsystem": "accel", 00:18:34.177 "config": [ 00:18:34.177 { 00:18:34.177 "method": "accel_set_options", 00:18:34.177 "params": { 00:18:34.177 "small_cache_size": 128, 00:18:34.177 "large_cache_size": 16, 00:18:34.177 "task_count": 2048, 00:18:34.177 "sequence_count": 2048, 00:18:34.177 "buf_count": 2048 00:18:34.177 } 00:18:34.177 } 00:18:34.177 ] 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "subsystem": "bdev", 00:18:34.177 "config": [ 00:18:34.177 { 00:18:34.177 "method": "bdev_set_options", 00:18:34.177 "params": { 00:18:34.177 "bdev_io_pool_size": 65535, 00:18:34.177 "bdev_io_cache_size": 256, 00:18:34.177 "bdev_auto_examine": true, 00:18:34.177 "iobuf_small_cache_size": 128, 00:18:34.177 "iobuf_large_cache_size": 16 
00:18:34.177 } 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "method": "bdev_raid_set_options", 00:18:34.177 "params": { 00:18:34.177 "process_window_size_kb": 1024, 00:18:34.177 "process_max_bandwidth_mb_sec": 0 00:18:34.177 } 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "method": "bdev_iscsi_set_options", 00:18:34.177 "params": { 00:18:34.177 "timeout_sec": 30 00:18:34.177 } 00:18:34.177 }, 00:18:34.177 { 00:18:34.177 "method": "bdev_nvme_set_options", 00:18:34.177 "params": { 00:18:34.177 "action_on_timeout": "none", 00:18:34.177 "timeout_us": 0, 00:18:34.177 "timeout_admin_us": 0, 00:18:34.178 "keep_alive_timeout_ms": 10000, 00:18:34.178 "arbitration_burst": 0, 00:18:34.178 "low_priority_weight": 0, 00:18:34.178 "medium_priority_weight": 0, 00:18:34.178 "high_priority_weight": 0, 00:18:34.178 "nvme_adminq_poll_period_us": 10000, 00:18:34.178 "nvme_ioq_poll_period_us": 0, 00:18:34.178 "io_queue_requests": 0, 00:18:34.178 "delay_cmd_submit": true, 00:18:34.178 "transport_retry_count": 4, 00:18:34.178 "bdev_retry_count": 3, 00:18:34.178 "transport_ack_timeout": 0, 00:18:34.178 "ctrlr_loss_timeout_sec": 0, 00:18:34.178 "reconnect_delay_sec": 0, 00:18:34.178 "fast_io_fail_timeout_sec": 0, 00:18:34.178 "disable_auto_failback": false, 00:18:34.178 "generate_uuids": false, 00:18:34.178 "transport_tos": 0, 00:18:34.178 "nvme_error_stat": false, 00:18:34.178 "rdma_srq_size": 0, 00:18:34.178 "io_path_stat": false, 00:18:34.178 "allow_accel_sequence": false, 00:18:34.178 "rdma_max_cq_size": 0, 00:18:34.178 "rdma_cm_event_timeout_ms": 0, 00:18:34.178 "dhchap_digests": [ 00:18:34.178 "sha256", 00:18:34.178 "sha384", 00:18:34.178 "sha512" 00:18:34.178 ], 00:18:34.178 "dhchap_dhgroups": [ 00:18:34.178 "null", 00:18:34.178 "ffdhe2048", 00:18:34.178 "ffdhe3072", 00:18:34.178 "ffdhe4096", 00:18:34.178 "ffdhe6144", 00:18:34.178 "ffdhe8192" 00:18:34.178 ] 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "bdev_nvme_set_hotplug", 00:18:34.178 "params": { 00:18:34.178 
"period_us": 100000, 00:18:34.178 "enable": false 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "bdev_malloc_create", 00:18:34.178 "params": { 00:18:34.178 "name": "malloc0", 00:18:34.178 "num_blocks": 8192, 00:18:34.178 "block_size": 4096, 00:18:34.178 "physical_block_size": 4096, 00:18:34.178 "uuid": "5420195a-0c59-4caa-b13b-2890075b956f", 00:18:34.178 "optimal_io_boundary": 0, 00:18:34.178 "md_size": 0, 00:18:34.178 "dif_type": 0, 00:18:34.178 "dif_is_head_of_md": false, 00:18:34.178 "dif_pi_format": 0 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "bdev_wait_for_examine" 00:18:34.178 } 00:18:34.178 ] 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "subsystem": "nbd", 00:18:34.178 "config": [] 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "subsystem": "scheduler", 00:18:34.178 "config": [ 00:18:34.178 { 00:18:34.178 "method": "framework_set_scheduler", 00:18:34.178 "params": { 00:18:34.178 "name": "static" 00:18:34.178 } 00:18:34.178 } 00:18:34.178 ] 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "subsystem": "nvmf", 00:18:34.178 "config": [ 00:18:34.178 { 00:18:34.178 "method": "nvmf_set_config", 00:18:34.178 "params": { 00:18:34.178 "discovery_filter": "match_any", 00:18:34.178 "admin_cmd_passthru": { 00:18:34.178 "identify_ctrlr": false 00:18:34.178 } 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_set_max_subsystems", 00:18:34.178 "params": { 00:18:34.178 "max_subsystems": 1024 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_set_crdt", 00:18:34.178 "params": { 00:18:34.178 "crdt1": 0, 00:18:34.178 "crdt2": 0, 00:18:34.178 "crdt3": 0 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_create_transport", 00:18:34.178 "params": { 00:18:34.178 "trtype": "TCP", 00:18:34.178 "max_queue_depth": 128, 00:18:34.178 "max_io_qpairs_per_ctrlr": 127, 00:18:34.178 "in_capsule_data_size": 4096, 00:18:34.178 "max_io_size": 131072, 00:18:34.178 "io_unit_size": 
131072, 00:18:34.178 "max_aq_depth": 128, 00:18:34.178 "num_shared_buffers": 511, 00:18:34.178 "buf_cache_size": 4294967295, 00:18:34.178 "dif_insert_or_strip": false, 00:18:34.178 "zcopy": false, 00:18:34.178 "c2h_success": false, 00:18:34.178 "sock_priority": 0, 00:18:34.178 "abort_timeout_sec": 1, 00:18:34.178 "ack_timeout": 0, 00:18:34.178 "data_wr_pool_size": 0 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_create_subsystem", 00:18:34.178 "params": { 00:18:34.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.178 "allow_any_host": false, 00:18:34.178 "serial_number": "00000000000000000000", 00:18:34.178 "model_number": "SPDK bdev Controller", 00:18:34.178 "max_namespaces": 32, 00:18:34.178 "min_cntlid": 1, 00:18:34.178 "max_cntlid": 65519, 00:18:34.178 "ana_reporting": false 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_subsystem_add_host", 00:18:34.178 "params": { 00:18:34.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.178 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.178 "psk": "key0" 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_subsystem_add_ns", 00:18:34.178 "params": { 00:18:34.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.178 "namespace": { 00:18:34.178 "nsid": 1, 00:18:34.178 "bdev_name": "malloc0", 00:18:34.178 "nguid": "5420195A0C594CAAB13B2890075B956F", 00:18:34.178 "uuid": "5420195a-0c59-4caa-b13b-2890075b956f", 00:18:34.178 "no_auto_visible": false 00:18:34.178 } 00:18:34.178 } 00:18:34.178 }, 00:18:34.178 { 00:18:34.178 "method": "nvmf_subsystem_add_listener", 00:18:34.178 "params": { 00:18:34.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.178 "listen_address": { 00:18:34.178 "trtype": "TCP", 00:18:34.178 "adrfam": "IPv4", 00:18:34.178 "traddr": "10.0.0.2", 00:18:34.178 "trsvcid": "4420" 00:18:34.178 }, 00:18:34.178 "secure_channel": false, 00:18:34.178 "sock_impl": "ssl" 00:18:34.178 } 00:18:34.178 } 00:18:34.178 ] 00:18:34.178 } 00:18:34.178 ] 
00:18:34.178 }' 00:18:34.178 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:34.437 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:34.437 "subsystems": [ 00:18:34.437 { 00:18:34.437 "subsystem": "keyring", 00:18:34.437 "config": [ 00:18:34.437 { 00:18:34.437 "method": "keyring_file_add_key", 00:18:34.437 "params": { 00:18:34.437 "name": "key0", 00:18:34.437 "path": "/tmp/tmp.SlthOY8JbD" 00:18:34.437 } 00:18:34.437 } 00:18:34.437 ] 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "subsystem": "iobuf", 00:18:34.437 "config": [ 00:18:34.437 { 00:18:34.437 "method": "iobuf_set_options", 00:18:34.437 "params": { 00:18:34.437 "small_pool_count": 8192, 00:18:34.437 "large_pool_count": 1024, 00:18:34.437 "small_bufsize": 8192, 00:18:34.437 "large_bufsize": 135168 00:18:34.437 } 00:18:34.437 } 00:18:34.437 ] 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "subsystem": "sock", 00:18:34.437 "config": [ 00:18:34.437 { 00:18:34.437 "method": "sock_set_default_impl", 00:18:34.437 "params": { 00:18:34.437 "impl_name": "posix" 00:18:34.437 } 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "method": "sock_impl_set_options", 00:18:34.437 "params": { 00:18:34.437 "impl_name": "ssl", 00:18:34.437 "recv_buf_size": 4096, 00:18:34.437 "send_buf_size": 4096, 00:18:34.437 "enable_recv_pipe": true, 00:18:34.437 "enable_quickack": false, 00:18:34.437 "enable_placement_id": 0, 00:18:34.437 "enable_zerocopy_send_server": true, 00:18:34.437 "enable_zerocopy_send_client": false, 00:18:34.437 "zerocopy_threshold": 0, 00:18:34.437 "tls_version": 0, 00:18:34.437 "enable_ktls": false 00:18:34.437 } 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "method": "sock_impl_set_options", 00:18:34.437 "params": { 00:18:34.437 "impl_name": "posix", 00:18:34.437 "recv_buf_size": 2097152, 00:18:34.437 "send_buf_size": 2097152, 00:18:34.437 
"enable_recv_pipe": true, 00:18:34.437 "enable_quickack": false, 00:18:34.437 "enable_placement_id": 0, 00:18:34.437 "enable_zerocopy_send_server": true, 00:18:34.437 "enable_zerocopy_send_client": false, 00:18:34.437 "zerocopy_threshold": 0, 00:18:34.437 "tls_version": 0, 00:18:34.437 "enable_ktls": false 00:18:34.437 } 00:18:34.437 } 00:18:34.437 ] 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "subsystem": "vmd", 00:18:34.437 "config": [] 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "subsystem": "accel", 00:18:34.437 "config": [ 00:18:34.437 { 00:18:34.437 "method": "accel_set_options", 00:18:34.437 "params": { 00:18:34.437 "small_cache_size": 128, 00:18:34.437 "large_cache_size": 16, 00:18:34.437 "task_count": 2048, 00:18:34.437 "sequence_count": 2048, 00:18:34.437 "buf_count": 2048 00:18:34.437 } 00:18:34.437 } 00:18:34.437 ] 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "subsystem": "bdev", 00:18:34.437 "config": [ 00:18:34.437 { 00:18:34.437 "method": "bdev_set_options", 00:18:34.437 "params": { 00:18:34.437 "bdev_io_pool_size": 65535, 00:18:34.437 "bdev_io_cache_size": 256, 00:18:34.437 "bdev_auto_examine": true, 00:18:34.437 "iobuf_small_cache_size": 128, 00:18:34.437 "iobuf_large_cache_size": 16 00:18:34.437 } 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "method": "bdev_raid_set_options", 00:18:34.437 "params": { 00:18:34.437 "process_window_size_kb": 1024, 00:18:34.437 "process_max_bandwidth_mb_sec": 0 00:18:34.437 } 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "method": "bdev_iscsi_set_options", 00:18:34.437 "params": { 00:18:34.437 "timeout_sec": 30 00:18:34.437 } 00:18:34.437 }, 00:18:34.437 { 00:18:34.437 "method": "bdev_nvme_set_options", 00:18:34.437 "params": { 00:18:34.437 "action_on_timeout": "none", 00:18:34.437 "timeout_us": 0, 00:18:34.437 "timeout_admin_us": 0, 00:18:34.437 "keep_alive_timeout_ms": 10000, 00:18:34.437 "arbitration_burst": 0, 00:18:34.437 "low_priority_weight": 0, 00:18:34.437 "medium_priority_weight": 0, 00:18:34.437 
"high_priority_weight": 0, 00:18:34.437 "nvme_adminq_poll_period_us": 10000, 00:18:34.437 "nvme_ioq_poll_period_us": 0, 00:18:34.437 "io_queue_requests": 512, 00:18:34.437 "delay_cmd_submit": true, 00:18:34.437 "transport_retry_count": 4, 00:18:34.437 "bdev_retry_count": 3, 00:18:34.437 "transport_ack_timeout": 0, 00:18:34.437 "ctrlr_loss_timeout_sec": 0, 00:18:34.437 "reconnect_delay_sec": 0, 00:18:34.437 "fast_io_fail_timeout_sec": 0, 00:18:34.437 "disable_auto_failback": false, 00:18:34.437 "generate_uuids": false, 00:18:34.437 "transport_tos": 0, 00:18:34.437 "nvme_error_stat": false, 00:18:34.437 "rdma_srq_size": 0, 00:18:34.438 "io_path_stat": false, 00:18:34.438 "allow_accel_sequence": false, 00:18:34.438 "rdma_max_cq_size": 0, 00:18:34.438 "rdma_cm_event_timeout_ms": 0, 00:18:34.438 "dhchap_digests": [ 00:18:34.438 "sha256", 00:18:34.438 "sha384", 00:18:34.438 "sha512" 00:18:34.438 ], 00:18:34.438 "dhchap_dhgroups": [ 00:18:34.438 "null", 00:18:34.438 "ffdhe2048", 00:18:34.438 "ffdhe3072", 00:18:34.438 "ffdhe4096", 00:18:34.438 "ffdhe6144", 00:18:34.438 "ffdhe8192" 00:18:34.438 ] 00:18:34.438 } 00:18:34.438 }, 00:18:34.438 { 00:18:34.438 "method": "bdev_nvme_attach_controller", 00:18:34.438 "params": { 00:18:34.438 "name": "nvme0", 00:18:34.438 "trtype": "TCP", 00:18:34.438 "adrfam": "IPv4", 00:18:34.438 "traddr": "10.0.0.2", 00:18:34.438 "trsvcid": "4420", 00:18:34.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.438 "prchk_reftag": false, 00:18:34.438 "prchk_guard": false, 00:18:34.438 "ctrlr_loss_timeout_sec": 0, 00:18:34.438 "reconnect_delay_sec": 0, 00:18:34.438 "fast_io_fail_timeout_sec": 0, 00:18:34.438 "psk": "key0", 00:18:34.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.438 "hdgst": false, 00:18:34.438 "ddgst": false 00:18:34.438 } 00:18:34.438 }, 00:18:34.438 { 00:18:34.438 "method": "bdev_nvme_set_hotplug", 00:18:34.438 "params": { 00:18:34.438 "period_us": 100000, 00:18:34.438 "enable": false 00:18:34.438 } 00:18:34.438 }, 
00:18:34.438 { 00:18:34.438 "method": "bdev_enable_histogram", 00:18:34.438 "params": { 00:18:34.438 "name": "nvme0n1", 00:18:34.438 "enable": true 00:18:34.438 } 00:18:34.438 }, 00:18:34.438 { 00:18:34.438 "method": "bdev_wait_for_examine" 00:18:34.438 } 00:18:34.438 ] 00:18:34.438 }, 00:18:34.438 { 00:18:34.438 "subsystem": "nbd", 00:18:34.438 "config": [] 00:18:34.438 } 00:18:34.438 ] 00:18:34.438 }' 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3866905 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3866905 ']' 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3866905 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866905 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866905' 00:18:34.438 killing process with pid 3866905 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3866905 00:18:34.438 Received shutdown signal, test time was about 1.000000 seconds 00:18:34.438 00:18:34.438 Latency(us) 00:18:34.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.438 =================================================================================================================== 00:18:34.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:18:34.438 22:27:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3866905 00:18:34.696 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3866805 00:18:34.696 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3866805 ']' 00:18:34.696 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3866805 00:18:34.696 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3866805 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3866805' 00:18:34.697 killing process with pid 3866805 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3866805 00:18:34.697 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3866805 00:18:34.955 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:34.955 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.955 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:34.955 "subsystems": [ 00:18:34.955 { 00:18:34.955 "subsystem": "keyring", 00:18:34.955 "config": [ 00:18:34.955 { 00:18:34.955 "method": "keyring_file_add_key", 00:18:34.955 "params": { 00:18:34.955 "name": "key0", 00:18:34.955 "path": 
"/tmp/tmp.SlthOY8JbD" 00:18:34.955 } 00:18:34.955 } 00:18:34.955 ] 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "subsystem": "iobuf", 00:18:34.955 "config": [ 00:18:34.955 { 00:18:34.955 "method": "iobuf_set_options", 00:18:34.955 "params": { 00:18:34.955 "small_pool_count": 8192, 00:18:34.955 "large_pool_count": 1024, 00:18:34.955 "small_bufsize": 8192, 00:18:34.955 "large_bufsize": 135168 00:18:34.955 } 00:18:34.955 } 00:18:34.955 ] 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "subsystem": "sock", 00:18:34.955 "config": [ 00:18:34.955 { 00:18:34.955 "method": "sock_set_default_impl", 00:18:34.955 "params": { 00:18:34.955 "impl_name": "posix" 00:18:34.955 } 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "method": "sock_impl_set_options", 00:18:34.955 "params": { 00:18:34.955 "impl_name": "ssl", 00:18:34.955 "recv_buf_size": 4096, 00:18:34.955 "send_buf_size": 4096, 00:18:34.955 "enable_recv_pipe": true, 00:18:34.955 "enable_quickack": false, 00:18:34.955 "enable_placement_id": 0, 00:18:34.955 "enable_zerocopy_send_server": true, 00:18:34.955 "enable_zerocopy_send_client": false, 00:18:34.955 "zerocopy_threshold": 0, 00:18:34.955 "tls_version": 0, 00:18:34.955 "enable_ktls": false 00:18:34.955 } 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "method": "sock_impl_set_options", 00:18:34.955 "params": { 00:18:34.955 "impl_name": "posix", 00:18:34.955 "recv_buf_size": 2097152, 00:18:34.955 "send_buf_size": 2097152, 00:18:34.955 "enable_recv_pipe": true, 00:18:34.955 "enable_quickack": false, 00:18:34.955 "enable_placement_id": 0, 00:18:34.955 "enable_zerocopy_send_server": true, 00:18:34.955 "enable_zerocopy_send_client": false, 00:18:34.955 "zerocopy_threshold": 0, 00:18:34.955 "tls_version": 0, 00:18:34.955 "enable_ktls": false 00:18:34.955 } 00:18:34.955 } 00:18:34.955 ] 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "subsystem": "vmd", 00:18:34.955 "config": [] 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "subsystem": "accel", 00:18:34.955 "config": [ 00:18:34.955 { 
00:18:34.955 "method": "accel_set_options", 00:18:34.955 "params": { 00:18:34.955 "small_cache_size": 128, 00:18:34.955 "large_cache_size": 16, 00:18:34.955 "task_count": 2048, 00:18:34.955 "sequence_count": 2048, 00:18:34.955 "buf_count": 2048 00:18:34.955 } 00:18:34.955 } 00:18:34.955 ] 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "subsystem": "bdev", 00:18:34.955 "config": [ 00:18:34.955 { 00:18:34.955 "method": "bdev_set_options", 00:18:34.955 "params": { 00:18:34.955 "bdev_io_pool_size": 65535, 00:18:34.955 "bdev_io_cache_size": 256, 00:18:34.955 "bdev_auto_examine": true, 00:18:34.955 "iobuf_small_cache_size": 128, 00:18:34.955 "iobuf_large_cache_size": 16 00:18:34.955 } 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "method": "bdev_raid_set_options", 00:18:34.955 "params": { 00:18:34.955 "process_window_size_kb": 1024, 00:18:34.955 "process_max_bandwidth_mb_sec": 0 00:18:34.955 } 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "method": "bdev_iscsi_set_options", 00:18:34.955 "params": { 00:18:34.955 "timeout_sec": 30 00:18:34.955 } 00:18:34.955 }, 00:18:34.955 { 00:18:34.955 "method": "bdev_nvme_set_options", 00:18:34.955 "params": { 00:18:34.955 "action_on_timeout": "none", 00:18:34.955 "timeout_us": 0, 00:18:34.955 "timeout_admin_us": 0, 00:18:34.955 "keep_alive_timeout_ms": 10000, 00:18:34.955 "arbitration_burst": 0, 00:18:34.955 "low_priority_weight": 0, 00:18:34.955 "medium_priority_weight": 0, 00:18:34.955 "high_priority_weight": 0, 00:18:34.955 "nvme_adminq_poll_period_us": 10000, 00:18:34.955 "nvme_ioq_poll_period_us": 0, 00:18:34.955 "io_queue_requests": 0, 00:18:34.955 "delay_cmd_submit": true, 00:18:34.955 "transport_retry_count": 4, 00:18:34.956 "bdev_retry_count": 3, 00:18:34.956 "transport_ack_timeout": 0, 00:18:34.956 "ctrlr_loss_timeout_sec": 0, 00:18:34.956 "reconnect_delay_sec": 0, 00:18:34.956 "fast_io_fail_timeout_sec": 0, 00:18:34.956 "disable_auto_failback": false, 00:18:34.956 "generate_uuids": false, 00:18:34.956 "transport_tos": 0, 
00:18:34.956 "nvme_error_stat": false, 00:18:34.956 "rdma_srq_size": 0, 00:18:34.956 "io_path_stat": false, 00:18:34.956 "allow_accel_sequence": false, 00:18:34.956 "rdma_max_cq_size": 0, 00:18:34.956 "rdma_cm_event_timeout_ms": 0, 00:18:34.956 "dhchap_digests": [ 00:18:34.956 "sha256", 00:18:34.956 "sha384", 00:18:34.956 "sha512" 00:18:34.956 ], 00:18:34.956 "dhchap_dhgroups": [ 00:18:34.956 "null", 00:18:34.956 "ffdhe2048", 00:18:34.956 "ffdhe3072", 00:18:34.956 "ffdhe4096", 00:18:34.956 "ffdhe6144", 00:18:34.956 "ffdhe8192" 00:18:34.956 ] 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "bdev_nvme_set_hotplug", 00:18:34.956 "params": { 00:18:34.956 "period_us": 100000, 00:18:34.956 "enable": false 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "bdev_malloc_create", 00:18:34.956 "params": { 00:18:34.956 "name": "malloc0", 00:18:34.956 "num_blocks": 8192, 00:18:34.956 "block_size": 4096, 00:18:34.956 "physical_block_size": 4096, 00:18:34.956 "uuid": "5420195a-0c59-4caa-b13b-2890075b956f", 00:18:34.956 "optimal_io_boundary": 0, 00:18:34.956 "md_size": 0, 00:18:34.956 "dif_type": 0, 00:18:34.956 "dif_is_head_of_md": false, 00:18:34.956 "dif_pi_format": 0 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "bdev_wait_for_examine" 00:18:34.956 } 00:18:34.956 ] 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "subsystem": "nbd", 00:18:34.956 "config": [] 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "subsystem": "scheduler", 00:18:34.956 "config": [ 00:18:34.956 { 00:18:34.956 "method": "framework_set_scheduler", 00:18:34.956 "params": { 00:18:34.956 "name": "static" 00:18:34.956 } 00:18:34.956 } 00:18:34.956 ] 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "subsystem": "nvmf", 00:18:34.956 "config": [ 00:18:34.956 { 00:18:34.956 "method": "nvmf_set_config", 00:18:34.956 "params": { 00:18:34.956 "discovery_filter": "match_any", 00:18:34.956 "admin_cmd_passthru": { 00:18:34.956 "identify_ctrlr": false 00:18:34.956 } 
00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_set_max_subsystems", 00:18:34.956 "params": { 00:18:34.956 "max_subsystems": 1024 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_set_crdt", 00:18:34.956 "params": { 00:18:34.956 "crdt1": 0, 00:18:34.956 "crdt2": 0, 00:18:34.956 "crdt3": 0 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_create_transport", 00:18:34.956 "params": { 00:18:34.956 "trtype": "TCP", 00:18:34.956 "max_queue_depth": 128, 00:18:34.956 "max_io_qpairs_per_ctrlr": 127, 00:18:34.956 "in_capsule_data_size": 4096, 00:18:34.956 "max_io_size": 131072, 00:18:34.956 "io_unit_size": 131072, 00:18:34.956 "max_aq_depth": 128, 00:18:34.956 "num_shared_buffers": 511, 00:18:34.956 "buf_cache_size": 4294967295, 00:18:34.956 "dif_insert_or_strip": false, 00:18:34.956 "zcopy": false, 00:18:34.956 "c2h_success": false, 00:18:34.956 "sock_priority": 0, 00:18:34.956 "abort_timeout_sec": 1, 00:18:34.956 "ack_timeout": 0, 00:18:34.956 "data_wr_pool_size": 0 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_create_subsystem", 00:18:34.956 "params": { 00:18:34.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.956 "allow_any_host": false, 00:18:34.956 "serial_number": "00000000000000000000", 00:18:34.956 "model_number": "SPDK bdev Controller", 00:18:34.956 "max_namespaces": 32, 00:18:34.956 "min_cntlid": 1, 00:18:34.956 "max_cntlid": 65519, 00:18:34.956 "ana_reporting": false 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_subsystem_add_host", 00:18:34.956 "params": { 00:18:34.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.956 "host": "nqn.2016-06.io.spdk:host1", 00:18:34.956 "psk": "key0" 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_subsystem_add_ns", 00:18:34.956 "params": { 00:18:34.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.956 "namespace": { 00:18:34.956 "nsid": 1, 00:18:34.956 "bdev_name": 
"malloc0", 00:18:34.956 "nguid": "5420195A0C594CAAB13B2890075B956F", 00:18:34.956 "uuid": "5420195a-0c59-4caa-b13b-2890075b956f", 00:18:34.956 "no_auto_visible": false 00:18:34.956 } 00:18:34.956 } 00:18:34.956 }, 00:18:34.956 { 00:18:34.956 "method": "nvmf_subsystem_add_listener", 00:18:34.956 "params": { 00:18:34.956 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.956 "listen_address": { 00:18:34.956 "trtype": "TCP", 00:18:34.956 "adrfam": "IPv4", 00:18:34.956 "traddr": "10.0.0.2", 00:18:34.956 "trsvcid": "4420" 00:18:34.956 }, 00:18:34.956 "secure_channel": false, 00:18:34.956 "sock_impl": "ssl" 00:18:34.956 } 00:18:34.956 } 00:18:34.956 ] 00:18:34.956 } 00:18:34.956 ] 00:18:34.956 }' 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3867141 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3867141 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3867141 ']' 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.956 22:28:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.956 [2024-07-24 22:28:00.482893] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:34.956 [2024-07-24 22:28:00.482991] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.956 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.956 [2024-07-24 22:28:00.548814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.214 [2024-07-24 22:28:00.668922] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.214 [2024-07-24 22:28:00.668984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.214 [2024-07-24 22:28:00.669000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.214 [2024-07-24 22:28:00.669014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.214 [2024-07-24 22:28:00.669025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.214 [2024-07-24 22:28:00.669123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.214 [2024-07-24 22:28:00.901190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.471 [2024-07-24 22:28:00.939231] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.471 [2024-07-24 22:28:00.939488] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3867260 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3867260 /var/tmp/bdevperf.sock 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3867260 ']' 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c 
/dev/fd/63 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:36.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:36.036 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:36.036 "subsystems": [ 00:18:36.036 { 00:18:36.036 "subsystem": "keyring", 00:18:36.036 "config": [ 00:18:36.036 { 00:18:36.036 "method": "keyring_file_add_key", 00:18:36.036 "params": { 00:18:36.036 "name": "key0", 00:18:36.036 "path": "/tmp/tmp.SlthOY8JbD" 00:18:36.036 } 00:18:36.036 } 00:18:36.036 ] 00:18:36.036 }, 00:18:36.036 { 00:18:36.036 "subsystem": "iobuf", 00:18:36.036 "config": [ 00:18:36.036 { 00:18:36.036 "method": "iobuf_set_options", 00:18:36.036 "params": { 00:18:36.036 "small_pool_count": 8192, 00:18:36.037 "large_pool_count": 1024, 00:18:36.037 "small_bufsize": 8192, 00:18:36.037 "large_bufsize": 135168 00:18:36.037 } 00:18:36.037 } 00:18:36.037 ] 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "subsystem": "sock", 00:18:36.037 "config": [ 00:18:36.037 { 00:18:36.037 "method": "sock_set_default_impl", 00:18:36.037 "params": { 00:18:36.037 "impl_name": "posix" 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "sock_impl_set_options", 00:18:36.037 "params": { 00:18:36.037 "impl_name": "ssl", 00:18:36.037 "recv_buf_size": 4096, 00:18:36.037 "send_buf_size": 4096, 00:18:36.037 "enable_recv_pipe": true, 00:18:36.037 "enable_quickack": false, 00:18:36.037 "enable_placement_id": 0, 00:18:36.037 "enable_zerocopy_send_server": true, 00:18:36.037 "enable_zerocopy_send_client": false, 00:18:36.037 "zerocopy_threshold": 0, 00:18:36.037 "tls_version": 0, 00:18:36.037 "enable_ktls": false 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "sock_impl_set_options", 00:18:36.037 "params": { 00:18:36.037 "impl_name": "posix", 
00:18:36.037 "recv_buf_size": 2097152, 00:18:36.037 "send_buf_size": 2097152, 00:18:36.037 "enable_recv_pipe": true, 00:18:36.037 "enable_quickack": false, 00:18:36.037 "enable_placement_id": 0, 00:18:36.037 "enable_zerocopy_send_server": true, 00:18:36.037 "enable_zerocopy_send_client": false, 00:18:36.037 "zerocopy_threshold": 0, 00:18:36.037 "tls_version": 0, 00:18:36.037 "enable_ktls": false 00:18:36.037 } 00:18:36.037 } 00:18:36.037 ] 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "subsystem": "vmd", 00:18:36.037 "config": [] 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "subsystem": "accel", 00:18:36.037 "config": [ 00:18:36.037 { 00:18:36.037 "method": "accel_set_options", 00:18:36.037 "params": { 00:18:36.037 "small_cache_size": 128, 00:18:36.037 "large_cache_size": 16, 00:18:36.037 "task_count": 2048, 00:18:36.037 "sequence_count": 2048, 00:18:36.037 "buf_count": 2048 00:18:36.037 } 00:18:36.037 } 00:18:36.037 ] 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "subsystem": "bdev", 00:18:36.037 "config": [ 00:18:36.037 { 00:18:36.037 "method": "bdev_set_options", 00:18:36.037 "params": { 00:18:36.037 "bdev_io_pool_size": 65535, 00:18:36.037 "bdev_io_cache_size": 256, 00:18:36.037 "bdev_auto_examine": true, 00:18:36.037 "iobuf_small_cache_size": 128, 00:18:36.037 "iobuf_large_cache_size": 16 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_raid_set_options", 00:18:36.037 "params": { 00:18:36.037 "process_window_size_kb": 1024, 00:18:36.037 "process_max_bandwidth_mb_sec": 0 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_iscsi_set_options", 00:18:36.037 "params": { 00:18:36.037 "timeout_sec": 30 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_nvme_set_options", 00:18:36.037 "params": { 00:18:36.037 "action_on_timeout": "none", 00:18:36.037 "timeout_us": 0, 00:18:36.037 "timeout_admin_us": 0, 00:18:36.037 "keep_alive_timeout_ms": 10000, 00:18:36.037 "arbitration_burst": 0, 00:18:36.037 
"low_priority_weight": 0, 00:18:36.037 "medium_priority_weight": 0, 00:18:36.037 "high_priority_weight": 0, 00:18:36.037 "nvme_adminq_poll_period_us": 10000, 00:18:36.037 "nvme_ioq_poll_period_us": 0, 00:18:36.037 "io_queue_requests": 512, 00:18:36.037 "delay_cmd_submit": true, 00:18:36.037 "transport_retry_count": 4, 00:18:36.037 "bdev_retry_count": 3, 00:18:36.037 "transport_ack_timeout": 0, 00:18:36.037 "ctrlr_loss_timeout_sec": 0, 00:18:36.037 "reconnect_delay_sec": 0, 00:18:36.037 "fast_io_fail_timeout_sec": 0, 00:18:36.037 "disable_auto_failback": false, 00:18:36.037 "generate_uuids": false, 00:18:36.037 "transport_tos": 0, 00:18:36.037 "nvme_error_stat": false, 00:18:36.037 "rdma_srq_size": 0, 00:18:36.037 "io_path_stat": false, 00:18:36.037 "allow_accel_sequence": false, 00:18:36.037 "rdma_max_cq_size": 0, 00:18:36.037 "rdma_cm_event_timeout_ms": 0, 00:18:36.037 "dhchap_digests": [ 00:18:36.037 "sha256", 00:18:36.037 "sha384", 00:18:36.037 "sha512" 00:18:36.037 ], 00:18:36.037 "dhchap_dhgroups": [ 00:18:36.037 "null", 00:18:36.037 "ffdhe2048", 00:18:36.037 "ffdhe3072", 00:18:36.037 "ffdhe4096", 00:18:36.037 "ffdhe6144", 00:18:36.037 "ffdhe8192" 00:18:36.037 ] 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_nvme_attach_controller", 00:18:36.037 "params": { 00:18:36.037 "name": "nvme0", 00:18:36.037 "trtype": "TCP", 00:18:36.037 "adrfam": "IPv4", 00:18:36.037 "traddr": "10.0.0.2", 00:18:36.037 "trsvcid": "4420", 00:18:36.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.037 "prchk_reftag": false, 00:18:36.037 "prchk_guard": false, 00:18:36.037 "ctrlr_loss_timeout_sec": 0, 00:18:36.037 "reconnect_delay_sec": 0, 00:18:36.037 "fast_io_fail_timeout_sec": 0, 00:18:36.037 "psk": "key0", 00:18:36.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:36.037 "hdgst": false, 00:18:36.037 "ddgst": false 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_nvme_set_hotplug", 00:18:36.037 "params": { 00:18:36.037 
"period_us": 100000, 00:18:36.037 "enable": false 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_enable_histogram", 00:18:36.037 "params": { 00:18:36.037 "name": "nvme0n1", 00:18:36.037 "enable": true 00:18:36.037 } 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "method": "bdev_wait_for_examine" 00:18:36.037 } 00:18:36.037 ] 00:18:36.037 }, 00:18:36.037 { 00:18:36.037 "subsystem": "nbd", 00:18:36.037 "config": [] 00:18:36.037 } 00:18:36.037 ] 00:18:36.037 }' 00:18:36.037 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:36.037 22:28:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:36.037 [2024-07-24 22:28:01.598318] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:36.037 [2024-07-24 22:28:01.598408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3867260 ] 00:18:36.037 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.037 [2024-07-24 22:28:01.659582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.296 [2024-07-24 22:28:01.780470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:36.296 [2024-07-24 22:28:01.950893] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.228 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:37.228 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:37.228 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:37.228 22:28:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:37.486 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.486 22:28:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.486 Running I/O for 1 seconds... 00:18:38.418 00:18:38.418 Latency(us) 00:18:38.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.418 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:38.418 Verification LBA range: start 0x0 length 0x2000 00:18:38.418 nvme0n1 : 1.04 2052.76 8.02 0.00 0.00 61100.54 8349.77 54370.61 00:18:38.418 =================================================================================================================== 00:18:38.418 Total : 2052.76 8.02 0.00 0.00 61100.54 8349.77 54370.61 00:18:38.418 0 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:38.418 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:38.676 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z 
nvmf_trace.0 ]] 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:38.677 nvmf_trace.0 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3867260 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3867260 ']' 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3867260 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3867260 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3867260' 00:18:38.677 killing process with pid 3867260 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3867260 00:18:38.677 Received shutdown signal, test time was about 1.000000 seconds 00:18:38.677 00:18:38.677 Latency(us) 00:18:38.677 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.677 
=================================================================================================================== 00:18:38.677 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:38.677 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3867260 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:38.935 rmmod nvme_tcp 00:18:38.935 rmmod nvme_fabrics 00:18:38.935 rmmod nvme_keyring 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3867141 ']' 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3867141 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3867141 ']' 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3867141 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3867141 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3867141' 00:18:38.935 killing process with pid 3867141 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3867141 00:18:38.935 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3867141 00:18:39.194 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.195 22:28:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.099 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.m1xUyMKlNQ /tmp/tmp.tpcpZIGqnF /tmp/tmp.SlthOY8JbD 00:18:41.358 00:18:41.358 real 1m20.317s 
00:18:41.358 user 2m13.016s 00:18:41.358 sys 0m24.776s 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.358 ************************************ 00:18:41.358 END TEST nvmf_tls 00:18:41.358 ************************************ 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:41.358 ************************************ 00:18:41.358 START TEST nvmf_fips 00:18:41.358 ************************************ 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:41.358 * Looking for test storage... 
00:18:41.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:41.358 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:41.359 22:28:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:41.359 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # type -P openssl 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:41.360 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:41.617 Error setting digest 00:18:41.617 008284840E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:41.617 008284840E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:41.617 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:41.617 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:41.617 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:41.617 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:41.617 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:41.618 22:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:41.618 22:28:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.522 22:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:43.522 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:43.522 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.522 22:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:43.522 Found net devices under 0000:08:00.0: cvl_0_0 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.522 
22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.522 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:43.523 Found net devices under 0000:08:00.1: cvl_0_1 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:43.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:18:43.523 00:18:43.523 --- 10.0.0.2 ping statistics --- 00:18:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.523 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.523 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:43.523 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:18:43.523 00:18:43.523 --- 10.0.0.1 ping statistics --- 00:18:43.523 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.523 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3869091 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3869091 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3869091 ']' 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.523 22:28:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.523 [2024-07-24 22:28:08.954894] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:18:43.523 [2024-07-24 22:28:08.954992] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.523 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.523 [2024-07-24 22:28:09.022604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.523 [2024-07-24 22:28:09.137689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.523 [2024-07-24 22:28:09.137755] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.523 [2024-07-24 22:28:09.137771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.523 [2024-07-24 22:28:09.137785] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.523 [2024-07-24 22:28:09.137797] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:43.523 [2024-07-24 22:28:09.137833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:43.782 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:44.040 [2024-07-24 22:28:09.540285] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.041 [2024-07-24 22:28:09.556270] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.041 [2024-07-24 22:28:09.556506] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.041 [2024-07-24 22:28:09.586924] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:44.041 malloc0 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3869201 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3869201 /var/tmp/bdevperf.sock 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3869201 ']' 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.041 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:44.041 [2024-07-24 22:28:09.696147] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:18:44.041 [2024-07-24 22:28:09.696249] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3869201 ] 00:18:44.041 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.300 [2024-07-24 22:28:09.757814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.300 [2024-07-24 22:28:09.864998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.300 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.300 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:44.300 22:28:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:44.557 [2024-07-24 22:28:10.232325] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.557 [2024-07-24 22:28:10.232450] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: 
deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:44.815 TLSTESTn1 00:18:44.815 22:28:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:44.815 Running I/O for 10 seconds... 00:18:57.016 00:18:57.017 Latency(us) 00:18:57.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.017 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:57.017 Verification LBA range: start 0x0 length 0x2000 00:18:57.017 TLSTESTn1 : 10.03 3256.06 12.72 0.00 0.00 39235.38 7670.14 53982.25 00:18:57.017 =================================================================================================================== 00:18:57.017 Total : 3256.06 12.72 0.00 0.00 39235.38 7670.14 53982.25 00:18:57.017 0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:57.017 nvmf_trace.0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3869201 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3869201 ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3869201 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3869201 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3869201' 00:18:57.017 killing process with pid 3869201 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3869201 00:18:57.017 Received shutdown signal, test time was about 10.000000 seconds 00:18:57.017 00:18:57.017 Latency(us) 00:18:57.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.017 =================================================================================================================== 00:18:57.017 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.017 [2024-07-24 22:28:20.616260] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3869201 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.017 rmmod nvme_tcp 00:18:57.017 rmmod nvme_fabrics 00:18:57.017 rmmod nvme_keyring 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3869091 ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3869091 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3869091 ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3869091 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:57.017 22:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3869091 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3869091' 00:18:57.017 killing process with pid 3869091 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3869091 00:18:57.017 [2024-07-24 22:28:20.933322] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:57.017 22:28:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3869091 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:57.017 22:28:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:57.584 22:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:57.584 00:18:57.584 real 0m16.340s 00:18:57.584 user 0m21.339s 00:18:57.584 sys 0m5.288s 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:57.584 ************************************ 00:18:57.584 END TEST nvmf_fips 00:18:57.584 ************************************ 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:57.584 22:28:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:59.545 22:28:24 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:18:59.545 Found 0000:08:00.0 (0x8086 - 0x159b) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:18:59.545 Found 0000:08:00.1 (0x8086 - 0x159b) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:18:59.545 Found net devices under 0000:08:00.0: cvl_0_0 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:59.545 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:59.546 22:28:24 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:18:59.546 Found net devices under 0000:08:00.1: cvl_0_1 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:59.546 ************************************ 00:18:59.546 START TEST nvmf_perf_adq 00:18:59.546 ************************************ 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:59.546 * Looking for test storage... 
00:18:59.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.546 22:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:59.546 22:28:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:00.922 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:00.922 22:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:00.922 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:00.922 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:00.923 Found net devices under 0000:08:00.0: cvl_0_0 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:00.923 Found net devices under 0000:08:00.1: cvl_0_1 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:00.923 22:28:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:01.492 22:28:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:03.396 22:28:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:08.677 
22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:08.677 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:08.678 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:08.678 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:08.678 22:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:08.678 Found net devices under 0000:08:00.0: cvl_0_0 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:08.678 22:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:08.678 Found net devices under 0000:08:00.1: cvl_0_1 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:08.678 
22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:08.678 22:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:08.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:08.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:19:08.678 00:19:08.678 --- 10.0.0.2 ping statistics --- 00:19:08.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.678 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:08.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:08.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:19:08.678 00:19:08.678 --- 10.0.0.1 ping statistics --- 00:19:08.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:08.678 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3873603 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3873603 00:19:08.678 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3873603 ']' 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.679 [2024-07-24 22:28:34.128499] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:19:08.679 [2024-07-24 22:28:34.128613] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.679 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.679 [2024-07-24 22:28:34.195586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:08.679 [2024-07-24 22:28:34.313966] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.679 [2024-07-24 22:28:34.314030] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.679 [2024-07-24 22:28:34.314047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.679 [2024-07-24 22:28:34.314061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.679 [2024-07-24 22:28:34.314073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.679 [2024-07-24 22:28:34.314176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.679 [2024-07-24 22:28:34.314236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.679 [2024-07-24 22:28:34.314287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:08.679 [2024-07-24 22:28:34.314290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:08.679 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:08.939 22:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.939 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 [2024-07-24 22:28:34.544152] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 Malloc1 00:19:08.940 22:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:08.940 [2024-07-24 22:28:34.594344] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3873634 00:19:08.940 22:28:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:08.940 22:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:08.940 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.476 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:11.476 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.476 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:11.476 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.476 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:11.476 "tick_rate": 2700000000, 00:19:11.476 "poll_groups": [ 00:19:11.476 { 00:19:11.476 "name": "nvmf_tgt_poll_group_000", 00:19:11.476 "admin_qpairs": 1, 00:19:11.476 "io_qpairs": 1, 00:19:11.476 "current_admin_qpairs": 1, 00:19:11.476 "current_io_qpairs": 1, 00:19:11.476 "pending_bdev_io": 0, 00:19:11.476 "completed_nvme_io": 19794, 00:19:11.476 "transports": [ 00:19:11.476 { 00:19:11.476 "trtype": "TCP" 00:19:11.476 } 00:19:11.476 ] 00:19:11.476 }, 00:19:11.476 { 00:19:11.476 "name": "nvmf_tgt_poll_group_001", 00:19:11.476 "admin_qpairs": 0, 00:19:11.476 "io_qpairs": 1, 00:19:11.476 "current_admin_qpairs": 0, 00:19:11.476 "current_io_qpairs": 1, 00:19:11.476 "pending_bdev_io": 0, 00:19:11.476 "completed_nvme_io": 17540, 00:19:11.476 "transports": [ 00:19:11.476 { 00:19:11.476 "trtype": "TCP" 00:19:11.476 } 00:19:11.476 ] 00:19:11.476 }, 00:19:11.476 { 00:19:11.476 "name": "nvmf_tgt_poll_group_002", 00:19:11.476 "admin_qpairs": 0, 00:19:11.476 "io_qpairs": 1, 00:19:11.476 "current_admin_qpairs": 0, 00:19:11.476 "current_io_qpairs": 1, 00:19:11.477 "pending_bdev_io": 0, 
00:19:11.477 "completed_nvme_io": 18731, 00:19:11.477 "transports": [ 00:19:11.477 { 00:19:11.477 "trtype": "TCP" 00:19:11.477 } 00:19:11.477 ] 00:19:11.477 }, 00:19:11.477 { 00:19:11.477 "name": "nvmf_tgt_poll_group_003", 00:19:11.477 "admin_qpairs": 0, 00:19:11.477 "io_qpairs": 1, 00:19:11.477 "current_admin_qpairs": 0, 00:19:11.477 "current_io_qpairs": 1, 00:19:11.477 "pending_bdev_io": 0, 00:19:11.477 "completed_nvme_io": 19713, 00:19:11.477 "transports": [ 00:19:11.477 { 00:19:11.477 "trtype": "TCP" 00:19:11.477 } 00:19:11.477 ] 00:19:11.477 } 00:19:11.477 ] 00:19:11.477 }' 00:19:11.477 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:11.477 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:11.477 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:11.477 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:11.477 22:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3873634 00:19:19.603 Initializing NVMe Controllers 00:19:19.603 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:19.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:19.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:19.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:19.603 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:19.603 Initialization complete. Launching workers. 
00:19:19.603 ======================================================== 00:19:19.603 Latency(us) 00:19:19.603 Device Information : IOPS MiB/s Average min max 00:19:19.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 9814.10 38.34 6521.23 2135.01 10911.14 00:19:19.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9203.40 35.95 6955.20 3124.45 10727.85 00:19:19.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10380.80 40.55 6165.38 3807.76 9648.33 00:19:19.603 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10419.20 40.70 6142.96 5487.52 7674.07 00:19:19.603 ======================================================== 00:19:19.603 Total : 39817.48 155.54 6429.78 2135.01 10911.14 00:19:19.603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:19.603 rmmod nvme_tcp 00:19:19.603 rmmod nvme_fabrics 00:19:19.603 rmmod nvme_keyring 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:19.603 22:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3873603 ']' 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3873603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3873603 ']' 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3873603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3873603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3873603' 00:19:19.603 killing process with pid 3873603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3873603 00:19:19.603 22:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3873603 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:19.603 22:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.513 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:21.513 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:21.513 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:22.084 22:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:23.991 22:28:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:29.270 22:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.270 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:29.271 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:29.271 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:29.271 Found net devices under 0000:08:00.0: cvl_0_0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:29.271 Found net devices under 0000:08:00.1: cvl_0_1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:29.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:29.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:19:29.271 00:19:29.271 --- 10.0.0.2 ping statistics --- 00:19:29.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.271 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:29.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:19:29.271 00:19:29.271 --- 10.0.0.1 ping statistics --- 00:19:29.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.271 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk 
ethtool --offload cvl_0_0 hw-tc-offload on 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:29.271 net.core.busy_poll = 1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:29.271 net.core.busy_read = 1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 
00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3875737 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3875737 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3875737 ']' 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.271 22:28:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.272 [2024-07-24 22:28:54.899808] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:19:29.272 [2024-07-24 22:28:54.899906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.272 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.272 [2024-07-24 22:28:54.964995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:29.531 [2024-07-24 22:28:55.082018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.531 [2024-07-24 22:28:55.082081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.531 [2024-07-24 22:28:55.082097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.531 [2024-07-24 22:28:55.082111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.531 [2024-07-24 22:28:55.082124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.531 [2024-07-24 22:28:55.082232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.531 [2024-07-24 22:28:55.082305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.531 [2024-07-24 22:28:55.082352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:29.531 [2024-07-24 22:28:55.082355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:29.531 22:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.531 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 [2024-07-24 22:28:55.328155] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 Malloc1 00:19:29.791 22:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:29.791 [2024-07-24 22:28:55.378383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3875774 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:29.791 22:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:29.791 EAL: No free 2048 kB hugepages reported on node 1 00:19:31.695 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:31.695 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.695 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:31.955 "tick_rate": 2700000000, 00:19:31.955 "poll_groups": [ 00:19:31.955 { 00:19:31.955 "name": "nvmf_tgt_poll_group_000", 00:19:31.955 "admin_qpairs": 1, 00:19:31.955 "io_qpairs": 1, 00:19:31.955 "current_admin_qpairs": 1, 00:19:31.955 "current_io_qpairs": 1, 00:19:31.955 "pending_bdev_io": 0, 00:19:31.955 "completed_nvme_io": 21934, 00:19:31.955 "transports": [ 00:19:31.955 { 00:19:31.955 "trtype": "TCP" 00:19:31.955 } 00:19:31.955 ] 00:19:31.955 }, 00:19:31.955 { 00:19:31.955 "name": "nvmf_tgt_poll_group_001", 00:19:31.955 "admin_qpairs": 0, 00:19:31.955 "io_qpairs": 3, 00:19:31.955 "current_admin_qpairs": 0, 00:19:31.955 "current_io_qpairs": 3, 00:19:31.955 "pending_bdev_io": 0, 00:19:31.955 "completed_nvme_io": 23118, 00:19:31.955 "transports": [ 00:19:31.955 { 00:19:31.955 "trtype": "TCP" 00:19:31.955 } 00:19:31.955 ] 00:19:31.955 }, 00:19:31.955 { 00:19:31.955 "name": "nvmf_tgt_poll_group_002", 00:19:31.955 "admin_qpairs": 0, 00:19:31.955 "io_qpairs": 0, 00:19:31.955 "current_admin_qpairs": 0, 00:19:31.955 "current_io_qpairs": 0, 
00:19:31.955 "pending_bdev_io": 0, 00:19:31.955 "completed_nvme_io": 0, 00:19:31.955 "transports": [ 00:19:31.955 { 00:19:31.955 "trtype": "TCP" 00:19:31.955 } 00:19:31.955 ] 00:19:31.955 }, 00:19:31.955 { 00:19:31.955 "name": "nvmf_tgt_poll_group_003", 00:19:31.955 "admin_qpairs": 0, 00:19:31.955 "io_qpairs": 0, 00:19:31.955 "current_admin_qpairs": 0, 00:19:31.955 "current_io_qpairs": 0, 00:19:31.955 "pending_bdev_io": 0, 00:19:31.955 "completed_nvme_io": 0, 00:19:31.955 "transports": [ 00:19:31.955 { 00:19:31.955 "trtype": "TCP" 00:19:31.955 } 00:19:31.955 ] 00:19:31.955 } 00:19:31.955 ] 00:19:31.955 }' 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:31.955 22:28:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3875774 00:19:40.098 Initializing NVMe Controllers 00:19:40.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:40.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:40.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:40.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:40.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:40.098 Initialization complete. Launching workers. 
00:19:40.098 ======================================================== 00:19:40.098 Latency(us) 00:19:40.098 Device Information : IOPS MiB/s Average min max 00:19:40.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12106.70 47.29 5286.57 1975.69 7819.36 00:19:40.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4616.80 18.03 13866.99 2085.32 62183.49 00:19:40.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4273.80 16.69 14977.08 1917.28 64837.49 00:19:40.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 3925.20 15.33 16306.95 2101.84 64491.99 00:19:40.098 ======================================================== 00:19:40.098 Total : 24922.50 97.35 10273.49 1917.28 64837.49 00:19:40.098 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.098 rmmod nvme_tcp 00:19:40.098 rmmod nvme_fabrics 00:19:40.098 rmmod nvme_keyring 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:40.098 22:29:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3875737 ']' 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3875737 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3875737 ']' 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3875737 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3875737 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3875737' 00:19:40.098 killing process with pid 3875737 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3875737 00:19:40.098 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3875737 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.361 22:29:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:43.660 00:19:43.660 real 0m44.097s 00:19:43.660 user 2m32.832s 00:19:43.660 sys 0m12.310s 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:43.660 ************************************ 00:19:43.660 END TEST nvmf_perf_adq 00:19:43.660 ************************************ 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:43.660 ************************************ 00:19:43.660 START TEST nvmf_shutdown 00:19:43.660 ************************************ 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:43.660 * Looking for test storage... 
00:19:43.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.660 22:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:43.660 22:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:43.660 ************************************ 00:19:43.660 START TEST nvmf_shutdown_tc1 00:19:43.660 ************************************ 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.660 22:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.660 22:29:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:45.566 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:45.566 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.567 22:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:45.567 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:45.567 Found net devices under 0000:08:00.0: cvl_0_0 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:45.567 Found net devices under 0000:08:00.1: cvl_0_1 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.567 22:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:45.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:45.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:19:45.567 00:19:45.567 --- 10.0.0.2 ping statistics --- 00:19:45.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.567 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:45.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:45.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:19:45.567 00:19:45.567 --- 10.0.0.1 ping statistics --- 00:19:45.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:45.567 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.567 
22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3878297 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3878297 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3878297 ']' 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:45.567 22:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.567 [2024-07-24 22:29:10.992336] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:19:45.567 [2024-07-24 22:29:10.992432] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.567 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.567 [2024-07-24 22:29:11.058867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.567 [2024-07-24 22:29:11.175926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:45.567 [2024-07-24 22:29:11.175990] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.567 [2024-07-24 22:29:11.176006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.568 [2024-07-24 22:29:11.176019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.568 [2024-07-24 22:29:11.176031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:45.568 [2024-07-24 22:29:11.176114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.568 [2024-07-24 22:29:11.176167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.568 [2024-07-24 22:29:11.176221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:45.568 [2024-07-24 22:29:11.176225] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.828 [2024-07-24 22:29:11.310619] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.828 22:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.828 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:45.828 Malloc1 00:19:45.828 [2024-07-24 22:29:11.387733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.828 Malloc2 00:19:45.828 Malloc3 00:19:45.828 Malloc4 00:19:46.089 Malloc5 00:19:46.089 Malloc6 00:19:46.089 Malloc7 00:19:46.089 Malloc8 00:19:46.089 Malloc9 
00:19:46.089 Malloc10 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3878437 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3878437 /var/tmp/bdevperf.sock 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3878437 ']' 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:46.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.349 { 00:19:46.349 "params": { 00:19:46.349 "name": "Nvme$subsystem", 00:19:46.349 "trtype": "$TEST_TRANSPORT", 00:19:46.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.349 "adrfam": "ipv4", 00:19:46.349 "trsvcid": "$NVMF_PORT", 00:19:46.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.349 "hdgst": ${hdgst:-false}, 00:19:46.349 "ddgst": ${ddgst:-false} 00:19:46.349 }, 00:19:46.349 "method": "bdev_nvme_attach_controller" 00:19:46.349 } 00:19:46.349 EOF 00:19:46.349 )") 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.349 { 00:19:46.349 "params": { 00:19:46.349 "name": "Nvme$subsystem", 00:19:46.349 "trtype": "$TEST_TRANSPORT", 00:19:46.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.349 "adrfam": "ipv4", 00:19:46.349 "trsvcid": "$NVMF_PORT", 00:19:46.349 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.349 "hdgst": ${hdgst:-false}, 00:19:46.349 "ddgst": ${ddgst:-false} 00:19:46.349 }, 00:19:46.349 "method": "bdev_nvme_attach_controller" 00:19:46.349 } 00:19:46.349 EOF 00:19:46.349 )") 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.349 { 00:19:46.349 "params": { 00:19:46.349 "name": "Nvme$subsystem", 00:19:46.349 "trtype": "$TEST_TRANSPORT", 00:19:46.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.349 "adrfam": "ipv4", 00:19:46.349 "trsvcid": "$NVMF_PORT", 00:19:46.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.349 "hdgst": ${hdgst:-false}, 00:19:46.349 "ddgst": ${ddgst:-false} 00:19:46.349 }, 00:19:46.349 "method": "bdev_nvme_attach_controller" 00:19:46.349 } 00:19:46.349 EOF 00:19:46.349 )") 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.349 { 00:19:46.349 "params": { 00:19:46.349 "name": "Nvme$subsystem", 00:19:46.349 "trtype": "$TEST_TRANSPORT", 00:19:46.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.349 "adrfam": "ipv4", 00:19:46.349 "trsvcid": "$NVMF_PORT", 00:19:46.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.349 "hdgst": 
${hdgst:-false}, 00:19:46.349 "ddgst": ${ddgst:-false} 00:19:46.349 }, 00:19:46.349 "method": "bdev_nvme_attach_controller" 00:19:46.349 } 00:19:46.349 EOF 00:19:46.349 )") 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.349 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 
00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:46.350 { 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme$subsystem", 00:19:46.350 "trtype": "$TEST_TRANSPORT", 00:19:46.350 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "$NVMF_PORT", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.350 "hdgst": ${hdgst:-false}, 00:19:46.350 "ddgst": ${ddgst:-false} 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 } 00:19:46.350 EOF 00:19:46.350 )") 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:46.350 22:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme1", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme2", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme3", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme4", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 
00:19:46.350 "params": { 00:19:46.350 "name": "Nvme5", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme6", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme7", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.350 "method": "bdev_nvme_attach_controller" 00:19:46.350 },{ 00:19:46.350 "params": { 00:19:46.350 "name": "Nvme8", 00:19:46.350 "trtype": "tcp", 00:19:46.350 "traddr": "10.0.0.2", 00:19:46.350 "adrfam": "ipv4", 00:19:46.350 "trsvcid": "4420", 00:19:46.350 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:46.350 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:46.350 "hdgst": false, 00:19:46.350 "ddgst": false 00:19:46.350 }, 00:19:46.351 "method": "bdev_nvme_attach_controller" 00:19:46.351 },{ 00:19:46.351 "params": { 00:19:46.351 "name": "Nvme9", 00:19:46.351 "trtype": "tcp", 00:19:46.351 "traddr": "10.0.0.2", 00:19:46.351 "adrfam": "ipv4", 00:19:46.351 "trsvcid": "4420", 00:19:46.351 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:46.351 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:46.351 "hdgst": false, 00:19:46.351 "ddgst": false 00:19:46.351 }, 00:19:46.351 "method": "bdev_nvme_attach_controller" 00:19:46.351 },{ 00:19:46.351 "params": { 00:19:46.351 "name": "Nvme10", 00:19:46.351 "trtype": "tcp", 00:19:46.351 "traddr": "10.0.0.2", 00:19:46.351 "adrfam": "ipv4", 00:19:46.351 "trsvcid": "4420", 00:19:46.351 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:46.351 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:46.351 "hdgst": false, 00:19:46.351 "ddgst": false 00:19:46.351 }, 00:19:46.351 "method": "bdev_nvme_attach_controller" 00:19:46.351 }' 00:19:46.351 [2024-07-24 22:29:11.879845] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:19:46.351 [2024-07-24 22:29:11.879933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:46.351 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.351 [2024-07-24 22:29:11.943187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.611 [2024-07-24 22:29:12.060151] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:48.513 22:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3878437 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:48.513 22:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:49.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3878437 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3878297 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": 
"$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 
}, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.451 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.451 "adrfam": "ipv4", 00:19:49.451 "trsvcid": "$NVMF_PORT", 00:19:49.451 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.451 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.451 "hdgst": ${hdgst:-false}, 00:19:49.451 "ddgst": ${ddgst:-false} 00:19:49.451 }, 00:19:49.451 "method": "bdev_nvme_attach_controller" 00:19:49.451 } 00:19:49.451 EOF 00:19:49.451 )") 00:19:49.451 22:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.451 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.451 { 00:19:49.451 "params": { 00:19:49.451 "name": "Nvme$subsystem", 00:19:49.451 "trtype": "$TEST_TRANSPORT", 00:19:49.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "$NVMF_PORT", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.452 "hdgst": ${hdgst:-false}, 00:19:49.452 "ddgst": ${ddgst:-false} 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 } 00:19:49.452 EOF 00:19:49.452 )") 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.452 { 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme$subsystem", 00:19:49.452 "trtype": "$TEST_TRANSPORT", 00:19:49.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "$NVMF_PORT", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.452 "hdgst": ${hdgst:-false}, 00:19:49.452 "ddgst": ${ddgst:-false} 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 } 00:19:49.452 EOF 00:19:49.452 )") 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.452 22:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:49.452 { 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme$subsystem", 00:19:49.452 "trtype": "$TEST_TRANSPORT", 00:19:49.452 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "$NVMF_PORT", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:49.452 "hdgst": ${hdgst:-false}, 00:19:49.452 "ddgst": ${ddgst:-false} 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 } 00:19:49.452 EOF 00:19:49.452 )") 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
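The repeated `config+=("$(cat <<-EOF ... EOF)")` entries above come from the `gen_nvmf_target_json` loop in nvmf/common.sh: one JSON fragment per subsystem is captured into a bash array via a heredoc, then the fragments are comma-joined with `IFS=,` and fed to `jq`. A minimal standalone sketch of that pattern follows; the function name and the reduced set of params are simplified stand-ins, not the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Simplified sketch of the heredoc-array JSON builder seen in this log.
# Each loop iteration appends one JSON object (as a string) to the array;
# joining with IFS=, yields a syntactically valid JSON array.
gen_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Join the fragments with commas and wrap them in [] so the result
    # parses as a JSON array (the real helper pipes this through jq).
    local IFS=,
    printf '[%s]\n' "${config[*]}"
}

gen_target_json 1 2
```

Capturing each heredoc through `$(cat <<EOF ...)` is what allows `$subsystem` and the other variables to expand per iteration while the surrounding JSON punctuation stays literal.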
00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:49.452 22:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme1", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme2", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme3", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme4", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 
00:19:49.452 "name": "Nvme5", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme6", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme7", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme8", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme9", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 },{ 00:19:49.452 "params": { 00:19:49.452 "name": "Nvme10", 00:19:49.452 "trtype": "tcp", 00:19:49.452 "traddr": "10.0.0.2", 00:19:49.452 "adrfam": "ipv4", 00:19:49.452 "trsvcid": "4420", 00:19:49.452 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:49.452 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:49.452 "hdgst": false, 00:19:49.452 "ddgst": false 00:19:49.452 }, 00:19:49.452 "method": "bdev_nvme_attach_controller" 00:19:49.452 }' 00:19:49.452 [2024-07-24 22:29:14.989193] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:19:49.452 [2024-07-24 22:29:14.989284] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3878765 ] 00:19:49.452 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.452 [2024-07-24 22:29:15.054038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.713 [2024-07-24 22:29:15.173975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:51.617 Running I/O for 1 seconds... 
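The bdevperf invocation logged above consumes the generated configuration through `--json /dev/fd/62`, i.e. bash process substitution: the JSON never touches a temporary file. A minimal illustration of that mechanism, with `cat` standing in for the consumer binary (bdevperf itself is not assumed here):

```shell
#!/usr/bin/env bash
# Hypothetical illustration of the process-substitution pattern in this log:
# <(...) expands to a readable /dev/fd/N path, so the consumer can be handed
# dynamically generated JSON as if it were a file on disk.
gen_config() {
    printf '{"subsystems": [%s]}\n' "$1"
}

# In the real test this would be: bdevperf --json <(gen_nvmf_target_json ...)
cat <(gen_config '"nvme"')
```

Because the /dev/fd path is tied to the producing process, the config exists only for the lifetime of the pipeline, which is why the killed bdev_svc line above shows `--json <(gen_nvmf_target_json "${num_subsystems[@]}")` rather than a file path.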
00:19:52.557 00:19:52.557 Latency(us) 00:19:52.557 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.557 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme1n1 : 1.19 214.49 13.41 0.00 0.00 289839.41 36894.34 279620.27 00:19:52.557 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme2n1 : 1.22 210.13 13.13 0.00 0.00 293834.15 20971.52 302921.96 00:19:52.557 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme3n1 : 1.09 187.85 11.74 0.00 0.00 312574.83 27767.85 273406.48 00:19:52.557 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme4n1 : 1.20 215.87 13.49 0.00 0.00 274800.57 5461.33 298261.62 00:19:52.557 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme5n1 : 1.16 165.15 10.32 0.00 0.00 352471.92 23787.14 320009.86 00:19:52.557 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme6n1 : 1.23 208.39 13.02 0.00 0.00 275142.73 23592.96 301368.51 00:19:52.557 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme7n1 : 1.16 166.18 10.39 0.00 0.00 335029.03 25631.86 307582.29 00:19:52.557 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme8n1 : 1.22 209.04 13.06 0.00 0.00 262950.68 21942.42 279620.27 00:19:52.557 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme9n1 : 1.22 210.32 13.14 0.00 0.00 255507.72 25049.32 301368.51 00:19:52.557 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:52.557 Verification LBA range: start 0x0 length 0x400 00:19:52.557 Nvme10n1 : 1.23 210.64 13.17 0.00 0.00 250272.13 1565.58 330883.98 00:19:52.557 =================================================================================================================== 00:19:52.557 Total : 1998.05 124.88 0.00 0.00 286795.09 1565.58 330883.98 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:52.819 
22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:52.819 rmmod nvme_tcp 00:19:52.819 rmmod nvme_fabrics 00:19:52.819 rmmod nvme_keyring 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3878297 ']' 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3878297 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3878297 ']' 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3878297 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3878297 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3878297' 00:19:52.819 killing process 
with pid 3878297 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3878297 00:19:52.819 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3878297 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.386 22:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:55.299 00:19:55.299 real 0m11.697s 00:19:55.299 user 0m35.368s 00:19:55.299 sys 0m2.983s 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 ************************************ 00:19:55.299 END TEST nvmf_shutdown_tc1 00:19:55.299 ************************************ 
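The teardown traced above follows the harness's `killprocess` pattern: probe the pid with `kill -0`, check the process name so a `sudo` wrapper is never signalled directly, send the signal, then `wait` to reap the exit status. A minimal standalone sketch reconstructed from the trace (the real helper lives in `autotest_common.sh` and differs in detail):

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-wait pattern seen in the trace above.
# Reconstructed for illustration; not the actual autotest_common.sh code.
killprocess() {
    local pid=$1
    # kill -0 only checks that the pid exists and is signallable
    kill -0 "$pid" 2>/dev/null || return 1
    # refuse to signal a sudo wrapper; the real worker should be targeted
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the child so no zombie is left behind
    wait "$pid" 2>/dev/null
    return 0
}

sleep 60 &
bgpid=$!
killprocess "$bgpid" && echo "reaped $bgpid"
```

Reaping with `wait` matters here because the autotest shells run long pipelines; an unreaped nvmf_tgt would otherwise linger as a zombie until the whole script exits.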
00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 ************************************ 00:19:55.299 START TEST nvmf_shutdown_tc2 00:19:55.299 ************************************ 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:55.299 22:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:55.299 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:19:55.300 Found 0000:08:00.0 (0x8086 - 0x159b) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.300 22:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:19:55.300 Found 0000:08:00.1 (0x8086 - 0x159b) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:19:55.300 Found net devices under 0000:08:00.0: cvl_0_0 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:19:55.300 Found net devices under 0000:08:00.1: cvl_0_1 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:55.300 
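The device-discovery loop above maps each supported PCI function to its kernel network interfaces by globbing `/sys/bus/pci/devices/<addr>/net/` and stripping the path prefix with `${var##*/}`. A hedged sketch of just that mapping step, run against a throwaway fake sysfs tree so it needs no real NIC (`pci_net_names` is an illustrative name, not a helper from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the PCI-to-netdev mapping performed in the trace above.
# The kernel exposes a NIC's interfaces under
#   /sys/bus/pci/devices/<pci-addr>/net/<ifname>
# so globbing that directory and keeping basenames yields interface names.
pci_net_names() {
    local sysfs_root=$1 pci=$2
    local devs=("$sysfs_root/$pci/net/"*)
    # ${devs[@]##*/} strips everything up to the last slash (basename)
    printf '%s\n' "${devs[@]##*/}"
}

# Demonstrate against a temporary fake tree mirroring the log's device
root=$(mktemp -d)
mkdir -p "$root/0000:08:00.0/net/cvl_0_0"
pci_net_names "$root" "0000:08:00.0"
rm -rf "$root"
```

In the real harness the sysfs root is `/sys/bus/pci/devices` and the discovered names (`cvl_0_0`, `cvl_0_1` in this run) are appended to `net_devs` for the TCP interface list.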
22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:55.300 22:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:55.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:55.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:19:55.560 00:19:55.560 --- 10.0.0.2 ping statistics --- 00:19:55.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.560 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:55.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:19:55.560 00:19:55.560 --- 10.0.0.1 ping statistics --- 00:19:55.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.560 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:55.560 22:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3879376 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3879376 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3879376 ']' 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
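The namespace plumbing completed above (flush both interfaces, create `cvl_0_0_ns_spdk`, move the target-side NIC into it, address each side, bring links up, then ping in both directions) requires root, so the sketch below is a dry run that only prints the command sequence; `print_netns_setup` is an illustrative name, and the names and addresses mirror this run's log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup traced above. Running the printed
# commands for real requires root; this function only emits them.
print_netns_setup() {
    local tgt_if=$1 ini_if=$2 ns=$3
    cat <<EOF
ip -4 addr flush $tgt_if
ip -4 addr flush $ini_if
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

print_netns_setup cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Isolating the target interface in its own namespace is what lets a single host act as both NVMe/TCP target (10.0.0.2, inside the netns) and initiator (10.0.0.1, in the root namespace) over real hardware.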
00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:55.560 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:55.560 [2024-07-24 22:29:21.144216] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:19:55.560 [2024-07-24 22:29:21.144312] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.560 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.560 [2024-07-24 22:29:21.230597] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:55.820 [2024-07-24 22:29:21.385280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.820 [2024-07-24 22:29:21.385358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.820 [2024-07-24 22:29:21.385389] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:55.820 [2024-07-24 22:29:21.385414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:55.820 [2024-07-24 22:29:21.385436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:55.820 [2024-07-24 22:29:21.385563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.820 [2024-07-24 22:29:21.385623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.820 [2024-07-24 22:29:21.385693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.820 [2024-07-24 22:29:21.385682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:55.820 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 [2024-07-24 22:29:21.553801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.080 22:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.080 22:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.080 Malloc1 00:19:56.080 [2024-07-24 22:29:21.644315] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:56.080 Malloc2 00:19:56.080 Malloc3 00:19:56.080 Malloc4 00:19:56.338 Malloc5 00:19:56.338 Malloc6 00:19:56.338 Malloc7 00:19:56.338 Malloc8 00:19:56.338 Malloc9 
00:19:56.338 Malloc10 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3879525 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3879525 /var/tmp/bdevperf.sock 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3879525 ']' 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
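The `gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10` call traced here builds one `bdev_nvme_attach_controller` stanza per subsystem number via a heredoc and hands the collected document to bdevperf through `--json /dev/fd/63`. A hedged sketch of a single stanza, with the runtime values from this log substituted for the `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`/`$NVMF_PORT` variables (`gen_stanza` is an illustrative name, not the harness helper):

```shell
#!/usr/bin/env bash
# Sketch of one per-subsystem config stanza assembled by the heredoc in
# the trace. Values (tcp, 10.0.0.2, 4420) are this run's; the real
# helper expands environment variables instead of hard-coding them.
gen_stanza() {
    local subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_stanza 1
```

Each of the ten stanzas attaches one controller (`Nvme1`..`Nvme10`) to its matching subsystem NQN, which is why the trace above repeats the same heredoc block once per subsystem.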
00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:56.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.597 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 
"adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": 
${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 
)") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:56.598 { 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme$subsystem", 00:19:56.598 "trtype": "$TEST_TRANSPORT", 00:19:56.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:56.598 "adrfam": "ipv4", 00:19:56.598 "trsvcid": "$NVMF_PORT", 00:19:56.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:56.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:56.598 "hdgst": ${hdgst:-false}, 00:19:56.598 "ddgst": ${ddgst:-false} 00:19:56.598 }, 00:19:56.598 "method": "bdev_nvme_attach_controller" 00:19:56.598 } 00:19:56.598 EOF 00:19:56.598 )") 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:56.598 
22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:56.598 22:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:56.598 "params": { 00:19:56.598 "name": "Nvme1", 00:19:56.598 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme2", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme3", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme4", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 
00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme5", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme6", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme7", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme8", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme9", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 },{ 00:19:56.599 "params": { 00:19:56.599 "name": "Nvme10", 00:19:56.599 "trtype": "tcp", 00:19:56.599 "traddr": "10.0.0.2", 00:19:56.599 "adrfam": "ipv4", 00:19:56.599 "trsvcid": "4420", 00:19:56.599 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:56.599 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:56.599 "hdgst": false, 00:19:56.599 "ddgst": false 00:19:56.599 }, 00:19:56.599 "method": "bdev_nvme_attach_controller" 00:19:56.599 }' 00:19:56.599 [2024-07-24 22:29:22.119139] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:19:56.599 [2024-07-24 22:29:22.119233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3879525 ] 00:19:56.599 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.599 [2024-07-24 22:29:22.181791] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.599 [2024-07-24 22:29:22.298734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.978 Running I/O for 10 seconds... 
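The repeated `config+=("$(cat <<-EOF ... EOF)")` blocks above are ten iterations of nvmf/common.sh's `gen_nvmf_target_json` loop: one `bdev_nvme_attach_controller` stanza per subsystem, comma-joined with `IFS=,` and validated through `jq .`. A minimal standalone sketch of that pattern (three subsystems instead of ten; the transport/address values are illustrative stand-ins for `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` from the test environment):

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem JSON accumulation visible in the log above.
# The real helper substitutes these from the test environment; the values
# here are illustrative only.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2 3; do
    # One here-doc per subsystem, appended as a single array element,
    # exactly as the config+=("$(cat <<-EOF ...)") lines in the log do.
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the stanzas, as the IFS=, + printf '%s\n' step in the log does;
# the real helper then pipes the result through `jq .` for validation.
IFS=,
printf '%s\n' "${config[*]}"
```

The joined output is what bdevperf receives via `--json /dev/fd/63` in the invocation recorded above.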
00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:58.545 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.804 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3879525 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3879525 ']' 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3879525 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3879525 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3879525' 00:19:59.065 killing process with pid 3879525 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3879525 00:19:59.065 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3879525 00:19:59.065 
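The trace above is `waitforio` from target/shutdown.sh: it polls `bdev_get_iostat` over the bdevperf RPC socket, extracts `.bdevs[0].num_read_ops` with jq, and retries (budget `i=10`, sleeping 0.25s between polls) until at least 100 reads are observed. A hedged sketch of that loop with the RPC replaced by a fake in-process counter (`reads` and the inline JSON are stand-ins, not SPDK names):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio retry loop seen in the log. The real code calls
# `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1` and parses
# the reply with `jq -r '.bdevs[0].num_read_ops'`; here a counter fakes the RPC.
reads=0
ret=1
i=10                                   # same retry budget as the log
while (( i != 0 )); do
    reads=$((reads + 67))              # pretend I/O accumulated since last poll
    json=$(printf '{"bdevs":[{"num_read_ops":%d}]}' "$reads")
    read_io_count=${json//[!0-9]/}     # digit extraction standing in for jq
    if [ "$read_io_count" -ge 100 ]; then
        ret=0                          # enough verified reads: shutdown may begin
        break
    fi
    (( i-- ))                          # the real loop sleeps 0.25s here
done
echo "ret=$ret read_io_count=$read_io_count"
```

This mirrors the log's progression: the first poll reports 67 reads (below the 100 threshold), a later poll reports 131 and the helper returns 0 so the shutdown test can proceed.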
Received shutdown signal, test time was about 0.947051 seconds
00:19:59.065
00:19:59.065 Latency(us)
00:19:59.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:59.065 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme1n1 : 0.95 202.93 12.68 0.00 0.00 310929.89 42525.58 295154.73
00:19:59.065 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme2n1 : 0.94 204.19 12.76 0.00 0.00 301358.65 23981.32 312242.63
00:19:59.065 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme3n1 : 0.91 210.33 13.15 0.00 0.00 284635.59 20388.98 327777.09
00:19:59.065 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme4n1 : 0.92 209.56 13.10 0.00 0.00 277386.68 21845.33 290494.39
00:19:59.065 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme5n1 : 0.94 211.42 13.21 0.00 0.00 267639.52 2524.35 304475.40
00:19:59.065 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme6n1 : 0.90 142.80 8.93 0.00 0.00 384853.33 28544.57 324670.20
00:19:59.065 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme7n1 : 0.92 208.15 13.01 0.00 0.00 257727.40 39807.05 321563.31
00:19:59.065 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme8n1 : 0.93 206.02 12.88 0.00 0.00 253036.22 29515.47 312242.63
00:19:59.065 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme9n1 : 0.91 140.94 8.81 0.00 0.00 357233.40 25437.68 335544.32
00:19:59.065 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:59.065 Verification LBA range: start 0x0 length 0x400
00:19:59.065 Nvme10n1 : 0.93 138.33 8.65 0.00 0.00 354591.48 25826.04 347971.89
00:19:59.065 ===================================================================================================================
00:19:59.065 Total : 1874.68 117.17 0.00 0.00 298080.33 2524.35 347971.89
00:19:59.324 22:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3879376
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync
00:20:00.262 22:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.262 rmmod nvme_tcp 00:20:00.262 rmmod nvme_fabrics 00:20:00.262 rmmod nvme_keyring 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3879376 ']' 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3879376 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3879376 ']' 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3879376 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3879376 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- 
# process_name=reactor_1 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3879376' 00:20:00.262 killing process with pid 3879376 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3879376 00:20:00.262 22:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3879376 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:00.830 22:29:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:02.735 00:20:02.735 real 0m7.483s 00:20:02.735 user 0m22.082s 00:20:02.735 sys 0m1.470s 00:20:02.735 22:29:28 
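The `killprocess` sequence traced above guards its `kill` three ways: the pid argument must be non-empty (`'[' -z ... ']'`), `kill -0` must show the process alive, and `ps --no-headers -o comm=` must show the target is not the `sudo` wrapper itself before the kill + wait pair runs. A standalone sketch against a throwaway `sleep` child (the function body mirrors the pattern in the log; `sleep 30` is just a stand-in for the real target process):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard sequence from autotest_common.sh.
sleep 30 &                # stand-in for the bdevperf/nvmf target process
pid=$!

killprocess() {
    local p=$1
    [ -n "$p" ] || return 1                    # '[' -z "$pid" ']' guard
    kill -0 "$p" 2>/dev/null || return 1       # process must still exist
    local name
    name=$(ps --no-headers -o comm= "$p")      # same probe as in the log
    [ "$name" != sudo ] || return 1            # never signal the sudo wrapper
    echo "killing process with pid $p"
    kill "$p"
    wait "$p" 2>/dev/null || true              # reap it: the kill + wait pair
}

killprocess "$pid"
```

The `wait` matters: without it the killed process would linger as a zombie until the test script exits, and later `kill -0` probes could still see it.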
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:02.735 ************************************ 00:20:02.735 END TEST nvmf_shutdown_tc2 00:20:02.735 ************************************ 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.735 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:02.995 ************************************ 00:20:02.995 START TEST nvmf_shutdown_tc3 00:20:02.995 ************************************ 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 
-- # net_devs=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:02.995 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:02.995 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.995 22:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:02.995 Found net devices under 0000:08:00.0: cvl_0_0 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.995 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.996 22:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:02.996 Found net devices under 0000:08:00.1: cvl_0_1 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:02.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:20:02.996 00:20:02.996 --- 10.0.0.2 ping statistics --- 00:20:02.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.996 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:02.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:20:02.996 00:20:02.996 --- 10.0.0.1 ping statistics --- 00:20:02.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.996 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.996 22:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3880246 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3880246 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3880246 ']' 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:02.996 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:02.996 [2024-07-24 22:29:28.663416] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:02.996 [2024-07-24 22:29:28.663506] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.996 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.255 [2024-07-24 22:29:28.723415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.255 [2024-07-24 22:29:28.843319] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.255 [2024-07-24 22:29:28.843383] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.255 [2024-07-24 22:29:28.843399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.255 [2024-07-24 22:29:28.843412] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.255 [2024-07-24 22:29:28.843424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:03.255 [2024-07-24 22:29:28.843516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.255 [2024-07-24 22:29:28.843602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:03.255 [2024-07-24 22:29:28.843605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.255 [2024-07-24 22:29:28.843549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.255 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:03.255 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:03.255 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:03.255 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:03.255 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 [2024-07-24 22:29:28.982750] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.515 22:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.515 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:03.515 Malloc1 00:20:03.515 [2024-07-24 22:29:29.060629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.515 Malloc2 00:20:03.515 Malloc3 00:20:03.515 Malloc4 00:20:03.774 Malloc5 00:20:03.774 Malloc6 00:20:03.774 Malloc7 00:20:03.774 Malloc8 00:20:03.774 Malloc9 
00:20:03.774 Malloc10 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3880386 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3880386 /var/tmp/bdevperf.sock 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3880386 ']' 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:04.032 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:04.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 
"adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": 
${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 
)") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.033 "ddgst": ${ddgst:-false} 00:20:04.033 }, 00:20:04.033 "method": "bdev_nvme_attach_controller" 00:20:04.033 } 00:20:04.033 EOF 00:20:04.033 )") 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:04.033 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:04.033 { 00:20:04.033 "params": { 00:20:04.033 "name": "Nvme$subsystem", 00:20:04.033 "trtype": "$TEST_TRANSPORT", 00:20:04.033 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:04.033 "adrfam": "ipv4", 00:20:04.033 "trsvcid": "$NVMF_PORT", 00:20:04.033 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:04.033 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:04.033 "hdgst": ${hdgst:-false}, 00:20:04.034 "ddgst": ${ddgst:-false} 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 } 00:20:04.034 EOF 00:20:04.034 )") 00:20:04.034 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:04.034 
22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:20:04.034 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:04.034 22:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme1", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme2", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme3", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme4", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 
00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme5", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme6", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme7", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme8", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme9", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 },{ 00:20:04.034 "params": { 00:20:04.034 "name": "Nvme10", 00:20:04.034 "trtype": "tcp", 00:20:04.034 "traddr": "10.0.0.2", 00:20:04.034 "adrfam": "ipv4", 00:20:04.034 "trsvcid": "4420", 00:20:04.034 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:04.034 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:04.034 "hdgst": false, 00:20:04.034 "ddgst": false 00:20:04.034 }, 00:20:04.034 "method": "bdev_nvme_attach_controller" 00:20:04.034 }' 00:20:04.034 [2024-07-24 22:29:29.545545] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:04.034 [2024-07-24 22:29:29.545635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3880386 ] 00:20:04.034 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.034 [2024-07-24 22:29:29.609412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.034 [2024-07-24 22:29:29.726183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.934 Running I/O for 10 seconds... 
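The xtrace above (nvmf/common.sh@534–558) shows the harness building one `bdev_nvme_attach_controller` JSON fragment per subsystem in a bash array, then joining the fragments with `IFS=,` before piping the result to `jq`. A minimal standalone sketch of that join pattern follows; the fragment contents are deliberately reduced to one field, so this is an illustration of the array/IFS technique, not the real script:

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from the xtrace above (reduced fields,
# not the actual nvmf/common.sh): one JSON fragment per subsystem is appended
# to a bash array, then "${config[*]}" joins them with the first char of IFS.
config=()
for subsystem in 1 2; do
  config+=("{\"params\":{\"name\":\"Nvme$subsystem\"},\"method\":\"bdev_nvme_attach_controller\"}")
done
IFS=,                      # "${config[*]}" now expands with commas between fragments
joined="[${config[*]}]"    # a single JSON array of attach-controller calls
echo "$joined"             # the real script pipes this through: jq .
```

The same join is what produces the single `printf '%s\n' '{ ... },{ ... }'` argument visible in the log: each `},{` boundary is the comma inserted by `IFS=,` between two array elements.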
00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:05.934 22:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.934 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:06.194 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.194 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:06.194 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:06.194 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:06.454 22:29:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:06.754 22:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3880246 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3880246 ']' 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3880246 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3880246 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3880246' 00:20:06.754 killing process with pid 3880246 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3880246 00:20:06.754 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3880246 00:20:06.754 [2024-07-24 22:29:32.281959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110d40 is same with the state(5) to be set 00:20:06.754 [2024-07-24 22:29:32.282104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110d40 is same with the state(5) to be set 00:20:06.754 [2024-07-24 22:29:32.282123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1110d40 is same with the state(5) to be set 00:20:06.754 [2024-07-24 22:29:32.282137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110d40 is same with the state(5) to be set 00:20:06.754 [... identical tqpair=0x1110d40 message repeated with consecutive timestamps through 2024-07-24 22:29:32.282954 ...] 00:20:06.755 [2024-07-24 22:29:32.285656] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0700 is same with the state(5) to be set 00:20:06.755 [... identical tqpair=0x10f0700 message repeated with consecutive timestamps through 2024-07-24 22:29:32.286549 ...] 00:20:06.756 [2024-07-24 22:29:32.287630] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [... identical tqpair=0x10f0be0 message repeated with consecutive timestamps through 2024-07-24 22:29:32.287919 ...] 00:20:06.756 [2024-07-24 22:29:32.287932]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.287944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.287970] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.287984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.287997] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.288010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.756 [2024-07-24 22:29:32.288023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288076] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288102] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288128] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288259] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288272] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288311] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288414] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288427] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288440] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288466] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.288504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f0be0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289335] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289370] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289416] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289444] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289575] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289602] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289674] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289701] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289715] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289729] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289742] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289771] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289813] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289839] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289873] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289887] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289914] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.757 [2024-07-24 22:29:32.289956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.289972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.289985] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.289998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290045] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290060] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290089] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290102] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290158] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290213] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290242] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290259] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290299] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290312] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.290327] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f10a0 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292849] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292915] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292928] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292941] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292975] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.759 [2024-07-24 22:29:32.292988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293081] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293094] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293108] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293240] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293323] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293363] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293379] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293392] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293405] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293432] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293563] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.293576] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f1f00 is same with the state(5) to be set 00:20:06.760 [2024-07-24 22:29:32.294477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1110880 is same with the state(5) to be set 
00:20:06.761 [2024-07-24 22:29:32.305535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 [2024-07-24 22:29:32.305980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.305995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.761 
[2024-07-24 22:29:32.306013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.761 [2024-07-24 22:29:32.306036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 
22:29:32.306793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.306983] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.306998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.762 [2024-07-24 22:29:32.307308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.762 [2024-07-24 22:29:32.307326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 
[2024-07-24 22:29:32.307566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307758] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.307856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.307977] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13bfad0 was disconnected and freed. reset controller. 
00:20:06.763 [2024-07-24 22:29:32.308651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308866] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.308983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.308998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.763 [2024-07-24 22:29:32.309264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.763 [2024-07-24 22:29:32.309282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 
[2024-07-24 22:29:32.309429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309635] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.309967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.309985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 
[2024-07-24 22:29:32.310226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.764 [2024-07-24 22:29:32.310583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.764 [2024-07-24 22:29:32.310601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.765 [2024-07-24 22:29:32.310808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.765 [2024-07-24 22:29:32.310857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.310946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:20:06.765 [2024-07-24 22:29:32.311035] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1218210 was disconnected and freed. reset controller. 
00:20:06.765 [2024-07-24 22:29:32.311475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c1e0 is same with the state(5) to be set 00:20:06.765 [2024-07-24 22:29:32.311694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c9b0 is same with the state(5) to be set 00:20:06.765 [2024-07-24 22:29:32.311921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.311980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.311997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:06.765 [2024-07-24 22:29:32.312012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d58c0 is same with the state(5) to be set 00:20:06.765 [2024-07-24 22:29:32.312109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312228] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312242] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bfe00 is same with the state(5) to be set 00:20:06.765 [2024-07-24 22:29:32.312292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127b8e0 is same with the state(5) to be set 00:20:06.765 [2024-07-24 22:29:32.312478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 
[2024-07-24 22:29:32.312508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.765 [2024-07-24 22:29:32.312600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.765 [2024-07-24 22:29:32.312615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22610 is same with the state(5) to be set 00:20:06.766 [2024-07-24 22:29:32.312657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e8690 is same with the state(5) to be set 00:20:06.766 [2024-07-24 22:29:32.312831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 
22:29:32.312940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.312954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.312969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123fa00 is same with the state(5) to be set 00:20:06.766 [2024-07-24 22:29:32.313019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313086] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1248c00 is same with the state(5) to be set 00:20:06.766 [2024-07-24 22:29:32.313192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:06.766 [2024-07-24 22:29:32.313310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121cad0 is same with the state(5) to be set 00:20:06.766 [2024-07-24 22:29:32.313408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 
22:29:32.313461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313669] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.766 [2024-07-24 22:29:32.313921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.766 [2024-07-24 22:29:32.313940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.313955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.313973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.313988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 
22:29:32.314258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.314981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.314997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 
[2024-07-24 22:29:32.315052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.767 [2024-07-24 22:29:32.315226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.767 [2024-07-24 22:29:32.315241] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.315653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.315669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13be6e0 is same with the state(5) to be set 00:20:06.768 [2024-07-24 22:29:32.315757] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13be6e0 was disconnected and freed. reset controller. 00:20:06.768 [2024-07-24 22:29:32.319600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:06.768 [2024-07-24 22:29:32.319684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:06.768 [2024-07-24 22:29:32.319720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22610 (9): Bad file descriptor 00:20:06.768 [2024-07-24 22:29:32.319750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e8690 (9): Bad file descriptor 00:20:06.768 [2024-07-24 22:29:32.321580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.321641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.321677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.321694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.321713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.321729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.321747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.321763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.321781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.768 [2024-07-24 22:29:32.321811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.768 [2024-07-24 22:29:32.321829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1376e30 is same with the state(5) to be set 00:20:06.768 [2024-07-24 22:29:32.321915] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1376e30 was disconnected and freed. reset controller. 
00:20:06.768 [2024-07-24 22:29:32.322143] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:06.768 [2024-07-24 22:29:32.322197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121cad0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c1e0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c9b0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d58c0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfe00 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127b8e0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fa00 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.322468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1248c00 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.324551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:06.768 [2024-07-24 22:29:32.324831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.768 [2024-07-24 22:29:32.324871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e8690 with addr=10.0.0.2, port=4420
00:20:06.768 [2024-07-24 22:29:32.324891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e8690 is same with the state(5) to be set
00:20:06.768 [2024-07-24 22:29:32.325010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.768 [2024-07-24 22:29:32.325038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22610 with addr=10.0.0.2, port=4420
00:20:06.768 [2024-07-24 22:29:32.325055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22610 is same with the state(5) to be set
00:20:06.768 [2024-07-24 22:29:32.325525] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.325610] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.325683] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.325783] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.325851] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.325915] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:06.768 [2024-07-24 22:29:32.326051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.768 [2024-07-24 22:29:32.326081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121cad0 with addr=10.0.0.2, port=4420
00:20:06.768 [2024-07-24 22:29:32.326098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121cad0 is same with the state(5) to be set
00:20:06.768 [2024-07-24 22:29:32.326207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:06.768 [2024-07-24 22:29:32.326232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124c9b0 with addr=10.0.0.2, port=4420
00:20:06.768 [2024-07-24 22:29:32.326249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c9b0 is same with the state(5) to be set
00:20:06.768 [2024-07-24 22:29:32.326287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e8690 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.326311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22610 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.326781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121cad0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.326813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c9b0 (9): Bad file descriptor
00:20:06.768 [2024-07-24 22:29:32.326832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:06.768 [2024-07-24 22:29:32.326846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:06.768 [2024-07-24 22:29:32.326864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:06.768 [2024-07-24 22:29:32.326892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:06.768 [2024-07-24 22:29:32.326907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:06.768 [2024-07-24 22:29:32.326921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:06.768 [2024-07-24 22:29:32.327004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:06.768 [2024-07-24 22:29:32.327026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:06.768 [2024-07-24 22:29:32.327040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:06.768 [2024-07-24 22:29:32.327053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:06.768 [2024-07-24 22:29:32.327068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:06.769 [2024-07-24 22:29:32.327088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:06.769 [2024-07-24 22:29:32.327103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:06.769 [2024-07-24 22:29:32.327117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:06.769 [2024-07-24 22:29:32.327183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:06.769 [2024-07-24 22:29:32.327200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:06.769 [2024-07-24 22:29:32.332411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.332975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.332992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 
22:29:32.333274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333456] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.769 [2024-07-24 22:29:32.333675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.769 [2024-07-24 22:29:32.333690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 
[2024-07-24 22:29:32.333854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.333980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.333995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334046] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.334682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.334700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12f8cc0 is same with the state(5) to be set 00:20:06.770 [2024-07-24 22:29:32.336202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.336247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.336277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.336293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.336311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.770 [2024-07-24 22:29:32.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.770 [2024-07-24 22:29:32.336355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.771 [2024-07-24 22:29:32.336389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.771 [2024-07-24 22:29:32.336422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.771 [2024-07-24 22:29:32.336455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.771 [2024-07-24 22:29:32.336497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.771 [2024-07-24 22:29:32.336532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.771 [2024-07-24 22:29:32.336547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:20:06.771 [2024-07-24 22:29:32.336565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.771 [2024-07-24 22:29:32.336581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical command/completion pairs repeat for READ cid:10-63 (lba 25856-32640 in steps of 128), each aborted with SQ DELETION (00/08) ...]
00:20:06.772 [2024-07-24 22:29:32.338442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fa200 is same with the state(5) to be set
00:20:06.772 [2024-07-24 22:29:32.339968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:06.772 [2024-07-24 22:29:32.340014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the dump repeats for READ cid:6-58 (lba 25344-32000), WRITE cid:0-3 (lba 32768-33152), READ cid:59-62 (lba 32128-32512), and WRITE cid:4 (lba 33280), each aborted with SQ DELETION (00/08) ...]
00:20:06.774 [2024-07-24 22:29:32.342170] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.342186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.342203] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1375eb0 is same with the state(5) to be set 00:20:06.774 [2024-07-24 22:29:32.343709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.343979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.343994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.774 [2024-07-24 22:29:32.344085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.774 [2024-07-24 22:29:32.344435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.774 [2024-07-24 22:29:32.344453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 
22:29:32.344857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.344982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.344997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 [2024-07-24 22:29:32.345406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.775 [2024-07-24 22:29:32.345425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.775 
[2024-07-24 22:29:32.345440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.345918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.345935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378350 is same with the state(5) to be set 00:20:06.776 [2024-07-24 22:29:32.347410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.776 [2024-07-24 22:29:32.347517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.347972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.347987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.776 [2024-07-24 22:29:32.348102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.776 [2024-07-24 22:29:32.348220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.776 [2024-07-24 22:29:32.348238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348286] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 
22:29:32.348862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.348977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.348992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349046] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 [2024-07-24 22:29:32.349390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.777 
[2024-07-24 22:29:32.349423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.777 [2024-07-24 22:29:32.349440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.349459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.349477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.349500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.349519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.349534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.349561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.349576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.349594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.349610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.349630] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1379620 is same with the state(5) to be set 00:20:06.778 [2024-07-24 22:29:32.351489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:06.778 [2024-07-24 22:29:32.351924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.351972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.351990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352104] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 
22:29:32.352700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.778 [2024-07-24 22:29:32.352766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.778 [2024-07-24 22:29:32.352783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352881] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.352980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.352995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 
[2024-07-24 22:29:32.353263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:06.779 [2024-07-24 22:29:32.353716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:06.779 [2024-07-24 22:29:32.353734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a31c0 is same with the state(5) to be set 00:20:06.779 [2024-07-24 22:29:32.355618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:06.779 [2024-07-24 22:29:32.355680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:06.779 [2024-07-24 22:29:32.355699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:06.779 [2024-07-24 22:29:32.355839] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.779 [2024-07-24 22:29:32.355866] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.779 [2024-07-24 22:29:32.355888] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.779 [2024-07-24 22:29:32.355913] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:06.779 [2024-07-24 22:29:32.356019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:06.779 [2024-07-24 22:29:32.356043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:06.779 task offset: 24704 on job bdev=Nvme2n1 fails
00:20:06.779
00:20:06.779 Latency(us)
00:20:06.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:06.779 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme1n1 ended in about 1.18 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.779 Nvme1n1 : 1.18 162.84 10.18 54.28 0.00 291572.81 14854.83 301368.51
00:20:06.779 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme2n1 ended in about 1.18 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.779 Nvme2n1 : 1.18 163.39 10.21 54.46 0.00 284871.11 27185.30 302921.96
00:20:06.779 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme3n1 ended in about 1.19 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.779 Nvme3n1 : 1.19 164.15 10.26 53.60 0.00 279568.73 28156.21 285834.05
00:20:06.779 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme4n1 ended in about 1.20 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.779 Nvme4n1 : 1.20 160.30 10.02 53.43 0.00 279201.56 18350.08 295154.73
00:20:06.779 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme5n1 ended in about 1.20 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.779 Nvme5n1 : 1.20 163.96 10.25 53.27 0.00 269116.84 23301.69 318456.41
00:20:06.779 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.779 Job: Nvme6n1 ended in about 1.18 seconds with error
00:20:06.779 Verification LBA range: start 0x0 length 0x400
00:20:06.780 Nvme6n1 : 1.18 163.18 10.20 54.39 0.00 262492.63 9077.95 312242.63
00:20:06.780 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.780 Job: Nvme7n1 ended in about 1.18 seconds with error
00:20:06.780 Verification LBA range: start 0x0 length 0x400
00:20:06.780 Nvme7n1 : 1.18 162.39 10.15 4.23 0.00 334640.94 24855.13 299815.06
00:20:06.780 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.780 Job: Nvme8n1 ended in about 1.21 seconds with error
00:20:06.780 Verification LBA range: start 0x0 length 0x400
00:20:06.780 Nvme8n1 : 1.21 159.31 9.96 53.10 0.00 258426.88 24466.77 299815.06
00:20:06.780 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.780 Job: Nvme9n1 ended in about 1.21 seconds with error
00:20:06.780 Verification LBA range: start 0x0 length 0x400
00:20:06.780 Nvme9n1 : 1.21 105.88 6.62 52.94 0.00 338479.79 39224.51 306028.85
00:20:06.780 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:06.780 Job: Nvme10n1 ended in about 1.21 seconds with error
00:20:06.780 Verification LBA range: start 0x0 length 0x400
00:20:06.780 Nvme10n1 : 1.21 105.52 6.60 52.76 0.00 332597.22 26991.12 327777.09
00:20:06.780 ===================================================================================================================
00:20:06.780 Total : 1510.93 94.43 486.47 0.00 289714.13 9077.95 327777.09
00:20:06.780 [2024-07-24 22:29:32.384318] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:06.780 [2024-07-24 22:29:32.384415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:06.780 [2024-07-24 22:29:32.384458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:06.780 [2024-07-24 22:29:32.384832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.384873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1248c00 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.384895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1248c00 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.385047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.385074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x123fa00 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.385091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x123fa00 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.385239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.385281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124c1e0 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.385298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c1e0 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.387290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:06.780 [2024-07-24 22:29:32.387352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:06.780 [2024-07-24 22:29:32.387685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.387722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x127b8e0 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.387743] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127b8e0 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.387862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.387888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13bfe00 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.387905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13bfe00 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.388037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.388063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13d58c0 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.388079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d58c0 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.388179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.388203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd22610 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.388220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd22610 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.388247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1248c00 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x123fa00 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c1e0 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388347] 
bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.780 [2024-07-24 22:29:32.388379] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.780 [2024-07-24 22:29:32.388400] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.780 [2024-07-24 22:29:32.388419] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:06.780 [2024-07-24 22:29:32.388527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:06.780 [2024-07-24 22:29:32.388693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.388723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e8690 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.388740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e8690 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.388847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.388874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x124c9b0 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.388890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x124c9b0 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.388927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x127b8e0 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13bfe00 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388967] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13d58c0 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.388986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd22610 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.389003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:06.780 [2024-07-24 22:29:32.389018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:06.780 [2024-07-24 22:29:32.389036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:06.780 [2024-07-24 22:29:32.389059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:06.780 [2024-07-24 22:29:32.389074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:06.780 [2024-07-24 22:29:32.389088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:06.780 [2024-07-24 22:29:32.389106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:06.780 [2024-07-24 22:29:32.389121] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:06.780 [2024-07-24 22:29:32.389135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:06.780 [2024-07-24 22:29:32.389248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.780 [2024-07-24 22:29:32.389269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.780 [2024-07-24 22:29:32.389282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.780 [2024-07-24 22:29:32.389409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:06.780 [2024-07-24 22:29:32.389435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x121cad0 with addr=10.0.0.2, port=4420 00:20:06.780 [2024-07-24 22:29:32.389452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x121cad0 is same with the state(5) to be set 00:20:06.780 [2024-07-24 22:29:32.389473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e8690 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.389503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x124c9b0 (9): Bad file descriptor 00:20:06.780 [2024-07-24 22:29:32.389521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:06.780 [2024-07-24 22:29:32.389542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:06.780 [2024-07-24 22:29:32.389556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:06.780 [2024-07-24 22:29:32.389575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:06.780 [2024-07-24 22:29:32.389596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:06.780 [2024-07-24 22:29:32.389609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:20:06.780 [2024-07-24 22:29:32.389626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:06.781 [2024-07-24 22:29:32.389641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:06.781 [2024-07-24 22:29:32.389655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:06.781 [2024-07-24 22:29:32.389677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:06.781 [2024-07-24 22:29:32.389692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:06.781 [2024-07-24 22:29:32.389706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:20:06.781 [2024-07-24 22:29:32.389753] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.781 [2024-07-24 22:29:32.389771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.781 [2024-07-24 22:29:32.389784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.781 [2024-07-24 22:29:32.389796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:06.781 [2024-07-24 22:29:32.389813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x121cad0 (9): Bad file descriptor 00:20:06.781 [2024-07-24 22:29:32.389830] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:06.781 [2024-07-24 22:29:32.389844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:06.781 [2024-07-24 22:29:32.389858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:06.781 [2024-07-24 22:29:32.389876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:20:06.781 [2024-07-24 22:29:32.389890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:20:06.781 [2024-07-24 22:29:32.389904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:20:06.781 [2024-07-24 22:29:32.389949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.781 [2024-07-24 22:29:32.389966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:06.781 [2024-07-24 22:29:32.389979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:06.781 [2024-07-24 22:29:32.389993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:06.781 [2024-07-24 22:29:32.390007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:06.781 [2024-07-24 22:29:32.390049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:07.041 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:20:07.041 22:29:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3880386 00:20:08.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3880386) - No such process 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:08.421 rmmod nvme_tcp 00:20:08.421 rmmod nvme_fabrics 00:20:08.421 rmmod nvme_keyring 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:08.421 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:08.422 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.422 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:08.422 22:29:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.332 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:10.332 00:20:10.332 real 0m7.393s 00:20:10.332 
user 0m17.803s 00:20:10.332 sys 0m1.539s 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:10.333 ************************************ 00:20:10.333 END TEST nvmf_shutdown_tc3 00:20:10.333 ************************************ 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:20:10.333 00:20:10.333 real 0m26.801s 00:20:10.333 user 1m15.342s 00:20:10.333 sys 0m6.145s 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:10.333 ************************************ 00:20:10.333 END TEST nvmf_shutdown 00:20:10.333 ************************************ 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:20:10.333 00:20:10.333 real 10m48.586s 00:20:10.333 user 26m0.330s 00:20:10.333 sys 2m27.342s 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:10.333 22:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:10.333 ************************************ 00:20:10.333 END TEST nvmf_target_extra 00:20:10.333 ************************************ 00:20:10.333 22:29:35 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:10.333 22:29:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.333 22:29:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.333 22:29:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.333 
************************************ 00:20:10.333 START TEST nvmf_host 00:20:10.333 ************************************ 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:10.333 * Looking for test storage... 00:20:10.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.333 22:29:35 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 
00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:10.333 22:29:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:10.593 ************************************ 00:20:10.593 START TEST nvmf_multicontroller 00:20:10.593 ************************************ 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:10.593 * Looking for test storage... 
00:20:10.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.593 22:29:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@291 -- # pci_devs=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:12.499 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:12.500 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:12.500 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:12.500 Found net devices under 0000:08:00.0: cvl_0_0 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:12.500 Found net devices under 0000:08:00.1: cvl_0_1 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:20:12.500 00:20:12.500 --- 10.0.0.2 ping statistics --- 00:20:12.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.500 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:20:12.500 00:20:12.500 --- 10.0.0.1 ping statistics --- 00:20:12.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.500 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3882329 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:12.500 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3882329 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3882329 ']' 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:12.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.501 22:29:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.501 [2024-07-24 22:29:37.898215] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:12.501 [2024-07-24 22:29:37.898320] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.501 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.501 [2024-07-24 22:29:37.965628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:12.501 [2024-07-24 22:29:38.082430] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.501 [2024-07-24 22:29:38.082503] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:12.501 [2024-07-24 22:29:38.082520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.501 [2024-07-24 22:29:38.082534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.501 [2024-07-24 22:29:38.082545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.501 [2024-07-24 22:29:38.082631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.501 [2024-07-24 22:29:38.082713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.501 [2024-07-24 22:29:38.082747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.501 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:12.501 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:12.501 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:12.501 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:12.501 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 [2024-07-24 22:29:38.224525] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 Malloc0 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 [2024-07-24 
22:29:38.289879] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 [2024-07-24 22:29:38.297791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.760 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.760 Malloc1 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3882402 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3882402 /var/tmp/bdevperf.sock 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@829 -- # '[' -z 3882402 ']' 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:12.761 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.020 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.020 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:13.020 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.020 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.020 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.280 NVMe0n1 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.280 1 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.280 request: 00:20:13.280 { 00:20:13.280 "name": "NVMe0", 00:20:13.280 "trtype": "tcp", 00:20:13.280 "traddr": "10.0.0.2", 00:20:13.280 "adrfam": "ipv4", 00:20:13.280 "trsvcid": "4420", 00:20:13.280 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.280 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:13.280 "hostaddr": "10.0.0.2", 00:20:13.280 "hostsvcid": "60000", 00:20:13.280 "prchk_reftag": false, 00:20:13.280 "prchk_guard": false, 00:20:13.280 "hdgst": false, 00:20:13.280 "ddgst": false, 00:20:13.280 "method": "bdev_nvme_attach_controller", 00:20:13.280 "req_id": 1 00:20:13.280 } 00:20:13.280 Got JSON-RPC error response 00:20:13.280 response: 00:20:13.280 { 00:20:13.280 "code": -114, 00:20:13.280 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.280 } 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.280 22:29:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.280 request: 00:20:13.280 { 00:20:13.280 "name": "NVMe0", 00:20:13.280 "trtype": "tcp", 00:20:13.280 "traddr": "10.0.0.2", 00:20:13.280 "adrfam": "ipv4", 00:20:13.280 "trsvcid": "4420", 00:20:13.280 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:13.280 "hostaddr": "10.0.0.2", 00:20:13.280 "hostsvcid": "60000", 00:20:13.280 "prchk_reftag": false, 00:20:13.280 "prchk_guard": false, 00:20:13.280 "hdgst": false, 00:20:13.280 "ddgst": false, 00:20:13.280 "method": "bdev_nvme_attach_controller", 00:20:13.280 "req_id": 1 00:20:13.280 } 00:20:13.280 Got JSON-RPC error response 00:20:13.280 response: 00:20:13.280 { 00:20:13.280 "code": -114, 00:20:13.280 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.280 } 00:20:13.280 
22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.280 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:13.281 22:29:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.281 request: 00:20:13.281 { 00:20:13.281 "name": "NVMe0", 00:20:13.281 "trtype": "tcp", 00:20:13.281 "traddr": "10.0.0.2", 00:20:13.281 "adrfam": "ipv4", 00:20:13.281 "trsvcid": "4420", 00:20:13.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.281 "hostaddr": "10.0.0.2", 00:20:13.281 "hostsvcid": "60000", 00:20:13.281 "prchk_reftag": false, 00:20:13.281 "prchk_guard": false, 00:20:13.281 "hdgst": false, 00:20:13.281 "ddgst": false, 00:20:13.281 "multipath": "disable", 00:20:13.281 "method": "bdev_nvme_attach_controller", 00:20:13.281 "req_id": 1 00:20:13.281 } 00:20:13.281 Got JSON-RPC error response 00:20:13.281 response: 00:20:13.281 { 00:20:13.281 "code": -114, 00:20:13.281 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:13.281 } 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.281 request: 00:20:13.281 { 00:20:13.281 "name": "NVMe0", 00:20:13.281 "trtype": "tcp", 00:20:13.281 "traddr": "10.0.0.2", 00:20:13.281 "adrfam": "ipv4", 00:20:13.281 "trsvcid": "4420", 00:20:13.281 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:13.281 "hostaddr": "10.0.0.2", 00:20:13.281 "hostsvcid": "60000", 00:20:13.281 "prchk_reftag": false, 00:20:13.281 "prchk_guard": false, 00:20:13.281 "hdgst": false, 00:20:13.281 "ddgst": false, 00:20:13.281 "multipath": "failover", 00:20:13.281 "method": "bdev_nvme_attach_controller", 00:20:13.281 "req_id": 1 00:20:13.281 } 00:20:13.281 Got JSON-RPC error response 00:20:13.281 response: 00:20:13.281 { 00:20:13.281 "code": -114, 00:20:13.281 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:13.281 
} 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.281 22:29:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.540 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:13.540 22:29:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.540 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.540 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:13.800 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.800 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:13.800 22:29:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:14.738 0 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3882402 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' 
-z 3882402 ']' 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3882402 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3882402 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3882402' 00:20:14.738 killing process with pid 3882402 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3882402 00:20:14.738 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3882402 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:14.996 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:20:14.997 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:14.997 [2024-07-24 22:29:38.403149] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:20:14.997 [2024-07-24 22:29:38.403260] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3882402 ] 00:20:14.997 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.997 [2024-07-24 22:29:38.464661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.997 [2024-07-24 22:29:38.581649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.997 [2024-07-24 22:29:39.237371] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 5d461fb6-eace-470f-9407-d5c1a6a52f92 already exists 00:20:14.997 [2024-07-24 22:29:39.237415] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:5d461fb6-eace-470f-9407-d5c1a6a52f92 alias for bdev NVMe1n1 00:20:14.997 [2024-07-24 22:29:39.237431] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:14.997 Running I/O for 1 seconds... 
00:20:14.997 00:20:14.997 Latency(us) 00:20:14.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.997 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:14.997 NVMe0n1 : 1.01 16798.12 65.62 0.00 0.00 7606.70 2536.49 12913.02 00:20:14.997 =================================================================================================================== 00:20:14.997 Total : 16798.12 65.62 0.00 0.00 7606.70 2536.49 12913.02 00:20:14.997 Received shutdown signal, test time was about 1.000000 seconds 00:20:14.997 00:20:14.997 Latency(us) 00:20:14.997 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:14.997 =================================================================================================================== 00:20:14.997 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:14.997 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:14.997 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:14.997 
rmmod nvme_tcp 00:20:14.997 rmmod nvme_fabrics 00:20:14.997 rmmod nvme_keyring 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3882329 ']' 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3882329 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3882329 ']' 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3882329 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3882329 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3882329' 00:20:15.256 killing process with pid 3882329 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3882329 00:20:15.256 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3882329 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.516 22:29:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.516 22:29:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:17.427 00:20:17.427 real 0m6.997s 00:20:17.427 user 0m11.536s 00:20:17.427 sys 0m2.009s 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:17.427 ************************************ 00:20:17.427 END TEST nvmf_multicontroller 00:20:17.427 ************************************ 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:17.427 22:29:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:17.427 ************************************ 00:20:17.427 START TEST nvmf_aer 00:20:17.427 ************************************ 00:20:17.427 22:29:43 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:17.686 * Looking for test storage... 00:20:17.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:17.686 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:17.687 22:29:43 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:19.062 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.062 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:19.063 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:19.063 Found net devices under 0000:08:00.0: cvl_0_0 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:19.063 Found net devices under 0000:08:00.1: cvl_0_1 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:19.063 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:19.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:19.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:20:19.324 00:20:19.324 --- 10.0.0.2 ping statistics --- 00:20:19.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.324 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:19.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:19.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:20:19.324 00:20:19.324 --- 10.0.0.1 ping statistics --- 00:20:19.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:19.324 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3884114 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3884114 00:20:19.324 22:29:44 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3884114 ']' 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.324 22:29:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.324 [2024-07-24 22:29:44.918208] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:19.324 [2024-07-24 22:29:44.918307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:19.324 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.324 [2024-07-24 22:29:44.984995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:19.585 [2024-07-24 22:29:45.106201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:19.585 [2024-07-24 22:29:45.106266] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:19.585 [2024-07-24 22:29:45.106281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:19.585 [2024-07-24 22:29:45.106295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:19.585 [2024-07-24 22:29:45.106306] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:19.585 [2024-07-24 22:29:45.106384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.585 [2024-07-24 22:29:45.106438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.585 [2024-07-24 22:29:45.106495] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:19.585 [2024-07-24 22:29:45.106502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.585 [2024-07-24 22:29:45.257802] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.585 22:29:45 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.585 Malloc0 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:19.585 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 [2024-07-24 22:29:45.308369] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:19.845 [ 
00:20:19.845 { 00:20:19.845 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:19.845 "subtype": "Discovery", 00:20:19.845 "listen_addresses": [], 00:20:19.845 "allow_any_host": true, 00:20:19.845 "hosts": [] 00:20:19.845 }, 00:20:19.845 { 00:20:19.845 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.845 "subtype": "NVMe", 00:20:19.845 "listen_addresses": [ 00:20:19.845 { 00:20:19.845 "trtype": "TCP", 00:20:19.845 "adrfam": "IPv4", 00:20:19.845 "traddr": "10.0.0.2", 00:20:19.845 "trsvcid": "4420" 00:20:19.845 } 00:20:19.845 ], 00:20:19.845 "allow_any_host": true, 00:20:19.845 "hosts": [], 00:20:19.845 "serial_number": "SPDK00000000000001", 00:20:19.845 "model_number": "SPDK bdev Controller", 00:20:19.845 "max_namespaces": 2, 00:20:19.845 "min_cntlid": 1, 00:20:19.845 "max_cntlid": 65519, 00:20:19.845 "namespaces": [ 00:20:19.845 { 00:20:19.845 "nsid": 1, 00:20:19.845 "bdev_name": "Malloc0", 00:20:19.845 "name": "Malloc0", 00:20:19.845 "nguid": "0E693001690C4239AF9DBAF2BDCF182A", 00:20:19.845 "uuid": "0e693001-690c-4239-af9d-baf2bdcf182a" 00:20:19.845 } 00:20:19.845 ] 00:20:19.845 } 00:20:19.845 ] 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3884145 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:20:19.845 22:29:45 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:19.845 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.845 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.105 Malloc1 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.105 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.105 [ 00:20:20.105 { 00:20:20.105 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:20.105 "subtype": "Discovery", 00:20:20.105 "listen_addresses": [], 00:20:20.105 "allow_any_host": true, 00:20:20.105 "hosts": [] 00:20:20.105 }, 00:20:20.105 { 00:20:20.105 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:20.105 "subtype": "NVMe", 00:20:20.105 "listen_addresses": [ 00:20:20.105 { 00:20:20.105 "trtype": "TCP", 00:20:20.105 "adrfam": "IPv4", 00:20:20.105 "traddr": "10.0.0.2", 00:20:20.105 "trsvcid": "4420" 00:20:20.105 } 00:20:20.105 ], 00:20:20.105 "allow_any_host": true, 00:20:20.105 "hosts": [], 00:20:20.105 "serial_number": "SPDK00000000000001", 00:20:20.105 "model_number": 
"SPDK bdev Controller", 00:20:20.105 "max_namespaces": 2, 00:20:20.105 "min_cntlid": 1, 00:20:20.105 "max_cntlid": 65519, 00:20:20.105 "namespaces": [ 00:20:20.106 { 00:20:20.106 "nsid": 1, 00:20:20.106 "bdev_name": "Malloc0", 00:20:20.106 "name": "Malloc0", 00:20:20.106 "nguid": "0E693001690C4239AF9DBAF2BDCF182A", 00:20:20.106 "uuid": "0e693001-690c-4239-af9d-baf2bdcf182a" 00:20:20.106 }, 00:20:20.106 { 00:20:20.106 "nsid": 2, 00:20:20.106 "bdev_name": "Malloc1", 00:20:20.106 "name": "Malloc1", 00:20:20.106 "nguid": "BC755D82755F4796BF5A1D9C323CABB9", 00:20:20.106 "uuid": "bc755d82-755f-4796-bf5a-1d9c323cabb9" 00:20:20.106 } 00:20:20.106 ] 00:20:20.106 } 00:20:20.106 ] 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3884145 00:20:20.106 Asynchronous Event Request test 00:20:20.106 Attaching to 10.0.0.2 00:20:20.106 Attached to 10.0.0.2 00:20:20.106 Registering asynchronous event callbacks... 00:20:20.106 Starting namespace attribute notice tests for all controllers... 00:20:20.106 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:20.106 aer_cb - Changed Namespace 00:20:20.106 Cleaning up... 
00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:20.106 rmmod nvme_tcp 
00:20:20.106 rmmod nvme_fabrics 00:20:20.106 rmmod nvme_keyring 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3884114 ']' 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3884114 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3884114 ']' 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3884114 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3884114 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3884114' 00:20:20.106 killing process with pid 3884114 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3884114 00:20:20.106 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3884114 00:20:20.365 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:20.365 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:20.365 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:20.365 22:29:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:20.365 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:20.365 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.366 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.366 22:29:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:22.908 00:20:22.908 real 0m4.915s 00:20:22.908 user 0m3.855s 00:20:22.908 sys 0m1.607s 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 ************************************ 00:20:22.908 END TEST nvmf_aer 00:20:22.908 ************************************ 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:22.908 ************************************ 00:20:22.908 START TEST nvmf_async_init 00:20:22.908 ************************************ 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:22.908 * Looking for test storage... 
00:20:22.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.908 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.909 22:29:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:22.909 22:29:48 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=7f60bff4b8cc45d7a57803fb3a932922 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:22.909 22:29:48 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:24.288 
22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:24.288 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.288 22:29:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:24.288 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:24.288 Found net devices under 0000:08:00.0: cvl_0_0 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:24.288 Found net devices under 0000:08:00.1: cvl_0_1 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:24.288 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:24.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:24.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:20:24.289 00:20:24.289 --- 10.0.0.2 ping statistics --- 00:20:24.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.289 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:24.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:24.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:20:24.289 00:20:24.289 --- 10.0.0.1 ping statistics --- 00:20:24.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:24.289 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:24.289 22:29:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3885647 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3885647 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3885647 ']' 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.289 22:29:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.289 [2024-07-24 22:29:49.926283] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:20:24.289 [2024-07-24 22:29:49.926376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:20:24.547 [2024-07-24 22:29:49.993042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.547 [2024-07-24 22:29:50.108845] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.547 [2024-07-24 22:29:50.108909] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.547 [2024-07-24 22:29:50.108925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.547 [2024-07-24 22:29:50.108938] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.547 [2024-07-24 22:29:50.108950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:24.547 [2024-07-24 22:29:50.108988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.547 [2024-07-24 22:29:50.234943] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.547 null0 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.547 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7f60bff4b8cc45d7a57803fb3a932922 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 [2024-07-24 22:29:50.275184] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 nvme0n1 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.807 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.068 [ 00:20:25.068 { 00:20:25.068 "name": "nvme0n1", 00:20:25.068 "aliases": [ 00:20:25.068 "7f60bff4-b8cc-45d7-a578-03fb3a932922" 00:20:25.068 ], 00:20:25.068 "product_name": "NVMe disk", 00:20:25.068 "block_size": 512, 00:20:25.068 "num_blocks": 2097152, 00:20:25.068 "uuid": "7f60bff4-b8cc-45d7-a578-03fb3a932922", 00:20:25.068 "assigned_rate_limits": { 00:20:25.068 "rw_ios_per_sec": 0, 00:20:25.068 "rw_mbytes_per_sec": 0, 00:20:25.068 "r_mbytes_per_sec": 0, 00:20:25.068 "w_mbytes_per_sec": 0 00:20:25.068 }, 00:20:25.068 "claimed": false, 00:20:25.068 "zoned": false, 00:20:25.068 "supported_io_types": { 00:20:25.068 "read": true, 00:20:25.068 "write": true, 00:20:25.068 "unmap": false, 00:20:25.068 "flush": true, 00:20:25.068 "reset": true, 00:20:25.068 "nvme_admin": true, 00:20:25.068 "nvme_io": true, 00:20:25.068 "nvme_io_md": false, 00:20:25.068 "write_zeroes": true, 00:20:25.068 "zcopy": false, 00:20:25.068 "get_zone_info": false, 00:20:25.068 "zone_management": false, 00:20:25.068 "zone_append": false, 00:20:25.068 "compare": true, 00:20:25.068 "compare_and_write": true, 00:20:25.068 "abort": true, 00:20:25.068 "seek_hole": false, 00:20:25.068 "seek_data": false, 00:20:25.068 "copy": true, 00:20:25.068 "nvme_iov_md": false 
00:20:25.068 }, 00:20:25.068 "memory_domains": [ 00:20:25.068 { 00:20:25.068 "dma_device_id": "system", 00:20:25.068 "dma_device_type": 1 00:20:25.068 } 00:20:25.068 ], 00:20:25.068 "driver_specific": { 00:20:25.068 "nvme": [ 00:20:25.068 { 00:20:25.068 "trid": { 00:20:25.068 "trtype": "TCP", 00:20:25.068 "adrfam": "IPv4", 00:20:25.068 "traddr": "10.0.0.2", 00:20:25.068 "trsvcid": "4420", 00:20:25.068 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.068 }, 00:20:25.068 "ctrlr_data": { 00:20:25.068 "cntlid": 1, 00:20:25.068 "vendor_id": "0x8086", 00:20:25.068 "model_number": "SPDK bdev Controller", 00:20:25.068 "serial_number": "00000000000000000000", 00:20:25.068 "firmware_revision": "24.09", 00:20:25.068 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.068 "oacs": { 00:20:25.068 "security": 0, 00:20:25.068 "format": 0, 00:20:25.068 "firmware": 0, 00:20:25.068 "ns_manage": 0 00:20:25.068 }, 00:20:25.068 "multi_ctrlr": true, 00:20:25.068 "ana_reporting": false 00:20:25.068 }, 00:20:25.068 "vs": { 00:20:25.068 "nvme_version": "1.3" 00:20:25.068 }, 00:20:25.068 "ns_data": { 00:20:25.068 "id": 1, 00:20:25.068 "can_share": true 00:20:25.068 } 00:20:25.068 } 00:20:25.068 ], 00:20:25.068 "mp_policy": "active_passive" 00:20:25.068 } 00:20:25.068 } 00:20:25.068 ] 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.068 [2024-07-24 22:29:50.529389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:25.068 [2024-07-24 22:29:50.529475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f73d0 
(9): Bad file descriptor 00:20:25.068 [2024-07-24 22:29:50.661653] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.068 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.068 [ 00:20:25.068 { 00:20:25.068 "name": "nvme0n1", 00:20:25.068 "aliases": [ 00:20:25.068 "7f60bff4-b8cc-45d7-a578-03fb3a932922" 00:20:25.068 ], 00:20:25.068 "product_name": "NVMe disk", 00:20:25.068 "block_size": 512, 00:20:25.068 "num_blocks": 2097152, 00:20:25.068 "uuid": "7f60bff4-b8cc-45d7-a578-03fb3a932922", 00:20:25.068 "assigned_rate_limits": { 00:20:25.068 "rw_ios_per_sec": 0, 00:20:25.068 "rw_mbytes_per_sec": 0, 00:20:25.068 "r_mbytes_per_sec": 0, 00:20:25.068 "w_mbytes_per_sec": 0 00:20:25.068 }, 00:20:25.068 "claimed": false, 00:20:25.068 "zoned": false, 00:20:25.068 "supported_io_types": { 00:20:25.068 "read": true, 00:20:25.068 "write": true, 00:20:25.068 "unmap": false, 00:20:25.068 "flush": true, 00:20:25.068 "reset": true, 00:20:25.068 "nvme_admin": true, 00:20:25.068 "nvme_io": true, 00:20:25.068 "nvme_io_md": false, 00:20:25.068 "write_zeroes": true, 00:20:25.068 "zcopy": false, 00:20:25.068 "get_zone_info": false, 00:20:25.068 "zone_management": false, 00:20:25.068 "zone_append": false, 00:20:25.068 "compare": true, 00:20:25.068 "compare_and_write": true, 00:20:25.068 "abort": true, 00:20:25.068 "seek_hole": false, 00:20:25.068 "seek_data": false, 00:20:25.068 "copy": true, 00:20:25.068 "nvme_iov_md": false 00:20:25.068 }, 00:20:25.068 "memory_domains": [ 00:20:25.068 { 00:20:25.068 "dma_device_id": "system", 00:20:25.068 "dma_device_type": 1 
00:20:25.068 } 00:20:25.068 ], 00:20:25.068 "driver_specific": { 00:20:25.068 "nvme": [ 00:20:25.068 { 00:20:25.068 "trid": { 00:20:25.068 "trtype": "TCP", 00:20:25.068 "adrfam": "IPv4", 00:20:25.069 "traddr": "10.0.0.2", 00:20:25.069 "trsvcid": "4420", 00:20:25.069 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.069 }, 00:20:25.069 "ctrlr_data": { 00:20:25.069 "cntlid": 2, 00:20:25.069 "vendor_id": "0x8086", 00:20:25.069 "model_number": "SPDK bdev Controller", 00:20:25.069 "serial_number": "00000000000000000000", 00:20:25.069 "firmware_revision": "24.09", 00:20:25.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.069 "oacs": { 00:20:25.069 "security": 0, 00:20:25.069 "format": 0, 00:20:25.069 "firmware": 0, 00:20:25.069 "ns_manage": 0 00:20:25.069 }, 00:20:25.069 "multi_ctrlr": true, 00:20:25.069 "ana_reporting": false 00:20:25.069 }, 00:20:25.069 "vs": { 00:20:25.069 "nvme_version": "1.3" 00:20:25.069 }, 00:20:25.069 "ns_data": { 00:20:25.069 "id": 1, 00:20:25.069 "can_share": true 00:20:25.069 } 00:20:25.069 } 00:20:25.069 ], 00:20:25.069 "mp_policy": "active_passive" 00:20:25.069 } 00:20:25.069 } 00:20:25.069 ] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ZUjIWaOEOr 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ZUjIWaOEOr 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.069 [2024-07-24 22:29:50.714072] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.069 [2024-07-24 22:29:50.714194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZUjIWaOEOr 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.069 [2024-07-24 22:29:50.722087] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ZUjIWaOEOr 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.069 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.069 [2024-07-24 22:29:50.730124] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:25.069 [2024-07-24 22:29:50.730181] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:25.330 nvme0n1 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.330 [ 00:20:25.330 { 00:20:25.330 "name": "nvme0n1", 00:20:25.330 "aliases": [ 00:20:25.330 "7f60bff4-b8cc-45d7-a578-03fb3a932922" 00:20:25.330 ], 00:20:25.330 "product_name": "NVMe disk", 00:20:25.330 "block_size": 512, 00:20:25.330 "num_blocks": 2097152, 00:20:25.330 "uuid": "7f60bff4-b8cc-45d7-a578-03fb3a932922", 00:20:25.330 "assigned_rate_limits": { 00:20:25.330 "rw_ios_per_sec": 0, 00:20:25.330 "rw_mbytes_per_sec": 0, 00:20:25.330 "r_mbytes_per_sec": 0, 00:20:25.330 "w_mbytes_per_sec": 0 00:20:25.330 }, 00:20:25.330 "claimed": false, 00:20:25.330 "zoned": false, 00:20:25.330 "supported_io_types": { 
00:20:25.330 "read": true, 00:20:25.330 "write": true, 00:20:25.330 "unmap": false, 00:20:25.330 "flush": true, 00:20:25.330 "reset": true, 00:20:25.330 "nvme_admin": true, 00:20:25.330 "nvme_io": true, 00:20:25.330 "nvme_io_md": false, 00:20:25.330 "write_zeroes": true, 00:20:25.330 "zcopy": false, 00:20:25.330 "get_zone_info": false, 00:20:25.330 "zone_management": false, 00:20:25.330 "zone_append": false, 00:20:25.330 "compare": true, 00:20:25.330 "compare_and_write": true, 00:20:25.330 "abort": true, 00:20:25.330 "seek_hole": false, 00:20:25.330 "seek_data": false, 00:20:25.330 "copy": true, 00:20:25.330 "nvme_iov_md": false 00:20:25.330 }, 00:20:25.330 "memory_domains": [ 00:20:25.330 { 00:20:25.330 "dma_device_id": "system", 00:20:25.330 "dma_device_type": 1 00:20:25.330 } 00:20:25.330 ], 00:20:25.330 "driver_specific": { 00:20:25.330 "nvme": [ 00:20:25.330 { 00:20:25.330 "trid": { 00:20:25.330 "trtype": "TCP", 00:20:25.330 "adrfam": "IPv4", 00:20:25.330 "traddr": "10.0.0.2", 00:20:25.330 "trsvcid": "4421", 00:20:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:25.330 }, 00:20:25.330 "ctrlr_data": { 00:20:25.330 "cntlid": 3, 00:20:25.330 "vendor_id": "0x8086", 00:20:25.330 "model_number": "SPDK bdev Controller", 00:20:25.330 "serial_number": "00000000000000000000", 00:20:25.330 "firmware_revision": "24.09", 00:20:25.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:25.330 "oacs": { 00:20:25.330 "security": 0, 00:20:25.330 "format": 0, 00:20:25.330 "firmware": 0, 00:20:25.330 "ns_manage": 0 00:20:25.330 }, 00:20:25.330 "multi_ctrlr": true, 00:20:25.330 "ana_reporting": false 00:20:25.330 }, 00:20:25.330 "vs": { 00:20:25.330 "nvme_version": "1.3" 00:20:25.330 }, 00:20:25.330 "ns_data": { 00:20:25.330 "id": 1, 00:20:25.330 "can_share": true 00:20:25.330 } 00:20:25.330 } 00:20:25.330 ], 00:20:25.330 "mp_policy": "active_passive" 00:20:25.330 } 00:20:25.330 } 00:20:25.330 ] 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ZUjIWaOEOr 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:25.330 rmmod nvme_tcp 00:20:25.330 rmmod nvme_fabrics 00:20:25.330 rmmod nvme_keyring 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3885647 ']' 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
3885647 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3885647 ']' 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3885647 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885647 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885647' 00:20:25.330 killing process with pid 3885647 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3885647 00:20:25.330 [2024-07-24 22:29:50.912705] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:25.330 [2024-07-24 22:29:50.912753] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:25.330 22:29:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3885647 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.591 22:29:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:27.499 00:20:27.499 real 0m5.103s 00:20:27.499 user 0m1.977s 00:20:27.499 sys 0m1.540s 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:27.499 ************************************ 00:20:27.499 END TEST nvmf_async_init 00:20:27.499 ************************************ 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.499 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.758 ************************************ 00:20:27.758 START TEST dma 00:20:27.758 ************************************ 00:20:27.758 22:29:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:27.758 * Looking for test storage... 
00:20:27.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.758 22:29:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.758 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:27.758 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.759 22:29:53 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:27.759 00:20:27.759 real 0m0.070s 00:20:27.759 user 0m0.028s 00:20:27.759 sys 0m0.046s 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:27.759 ************************************ 00:20:27.759 END TEST dma 00:20:27.759 ************************************ 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.759 ************************************ 00:20:27.759 START TEST nvmf_identify 00:20:27.759 ************************************ 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:27.759 * Looking for test storage... 
00:20:27.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.759 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.760 22:29:53 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:29.667 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.668 22:29:54 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:29.668 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:29.668 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:29.668 Found net devices under 0000:08:00.0: cvl_0_0 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:29.668 Found net devices under 0000:08:00.1: cvl_0_1 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:29.668 22:29:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:29.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:29.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:20:29.668 00:20:29.668 --- 10.0.0.2 ping statistics --- 00:20:29.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.668 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:29.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:20:29.668 00:20:29.668 --- 10.0.0.1 ping statistics --- 00:20:29.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.668 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:29.668 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 
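The nvmf_tcp_init steps traced above (`ip netns add` through the iptables rule and the two pings) build a single-host NVMe/TCP topology: the target port cvl_0_0 is moved into its own network namespace so that 10.0.0.1 to 10.0.0.2 traffic actually crosses the link. A minimal dry-run sketch of the same sequence — interface names and addresses are copied from this log and will differ on other hosts, and the real commands require root:

```shell
# Dry-run sketch of the namespace topology nvmf_tcp_init builds above.
# 'run' only prints each step; replace it with 'sudo' to execute for real.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                        # target-side namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP (default netns)
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                  # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
```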
00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3887298 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3887298 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3887298 ']' 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.669 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.669 [2024-07-24 22:29:55.169701] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:20:29.669 [2024-07-24 22:29:55.169803] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.669 EAL: No free 2048 kB hugepages reported on node 1 00:20:29.669 [2024-07-24 22:29:55.239366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.669 [2024-07-24 22:29:55.361090] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.669 [2024-07-24 22:29:55.361153] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.669 [2024-07-24 22:29:55.361169] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.669 [2024-07-24 22:29:55.361182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.669 [2024-07-24 22:29:55.361193] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
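The app_setup_trace notices above spell out how to pull the tracepoint data for this run (group mask 0xFFFF, shm id 0, app name nvmf). A dry-run sketch, using only the commands and paths the notices themselves print:

```shell
# Dry-run sketch of capturing the trace snapshot described in the notices
# above; 'run' only prints the steps instead of executing them.
run() { echo "+ $*"; }

run spdk_trace -s nvmf -i 0           # live snapshot of events at runtime
run cp /dev/shm/nvmf_trace.0 /tmp/    # keep the shm file for offline analysis
```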
00:20:29.669 [2024-07-24 22:29:55.362502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.669 [2024-07-24 22:29:55.362587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.669 [2024-07-24 22:29:55.362740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.669 [2024-07-24 22:29:55.362773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.928 [2024-07-24 22:29:55.491686] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.928 Malloc0 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.928 22:29:55 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.928 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 [2024-07-24 22:29:55.569025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 22:29:55 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.929 [ 00:20:29.929 { 00:20:29.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:29.929 "subtype": "Discovery", 00:20:29.929 "listen_addresses": [ 00:20:29.929 { 00:20:29.929 "trtype": "TCP", 00:20:29.929 "adrfam": "IPv4", 00:20:29.929 "traddr": "10.0.0.2", 00:20:29.929 "trsvcid": "4420" 00:20:29.929 } 00:20:29.929 ], 00:20:29.929 "allow_any_host": true, 00:20:29.929 "hosts": [] 00:20:29.929 }, 00:20:29.929 { 00:20:29.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.929 "subtype": "NVMe", 00:20:29.929 "listen_addresses": [ 00:20:29.929 { 00:20:29.929 "trtype": "TCP", 00:20:29.929 "adrfam": "IPv4", 00:20:29.929 "traddr": "10.0.0.2", 00:20:29.929 "trsvcid": "4420" 00:20:29.929 } 00:20:29.929 ], 00:20:29.929 "allow_any_host": true, 00:20:29.929 "hosts": [], 00:20:29.929 "serial_number": "SPDK00000000000001", 00:20:29.929 "model_number": "SPDK bdev Controller", 00:20:29.929 "max_namespaces": 32, 00:20:29.929 "min_cntlid": 1, 00:20:29.929 "max_cntlid": 65519, 00:20:29.929 "namespaces": [ 00:20:29.929 { 00:20:29.929 "nsid": 1, 00:20:29.929 "bdev_name": "Malloc0", 00:20:29.929 "name": "Malloc0", 00:20:29.929 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:29.929 "eui64": "ABCDEF0123456789", 00:20:29.929 "uuid": "dd70b050-c69e-44c7-b967-9a2e0ee43cfe" 00:20:29.929 } 00:20:29.929 ] 00:20:29.929 } 00:20:29.929 ] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.929 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:29.929 [2024-07-24 22:29:55.612660] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:29.929 [2024-07-24 22:29:55.612710] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887375 ] 00:20:29.929 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.189 [2024-07-24 22:29:55.656541] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:30.189 [2024-07-24 22:29:55.656621] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:30.189 [2024-07-24 22:29:55.656633] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:30.189 [2024-07-24 22:29:55.656652] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:30.189 [2024-07-24 22:29:55.656668] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:30.189 [2024-07-24 22:29:55.656911] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:30.189 [2024-07-24 22:29:55.656970] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17bd400 0 00:20:30.189 [2024-07-24 22:29:55.663506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:30.189 [2024-07-24 22:29:55.663544] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:30.189 [2024-07-24 22:29:55.663555] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 
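The rpc_cmd calls traced above (`nvmf_create_transport` through `nvmf_subsystem_add_listener`, then `nvmf_get_subsystems`) are SPDK RPC methods that can equally be driven through scripts/rpc.py against the running nvmf_tgt. A dry-run sketch of the same provisioning sequence, with the NQN, serial number, flags, and addresses copied verbatim from this log:

```shell
# Dry-run sketch of the provisioning sequence the harness drives above.
# 'rpc' only prints; point it at "$SPDK_DIR/scripts/rpc.py" (with nvmf_tgt
# running, here inside the cvl_0_0_ns_spdk namespace) to execute for real.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192   # flags as logged; -u is IO unit size
rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems                       # dumps the subsystem JSON
```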
00:20:30.189 [2024-07-24 22:29:55.663562] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:30.189 [2024-07-24 22:29:55.663625] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.663638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.663647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.189 [2024-07-24 22:29:55.663669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:30.189 [2024-07-24 22:29:55.663704] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.189 [2024-07-24 22:29:55.671512] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.189 [2024-07-24 22:29:55.671531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.189 [2024-07-24 22:29:55.671539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.671548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.189 [2024-07-24 22:29:55.671570] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:30.189 [2024-07-24 22:29:55.671583] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:30.189 [2024-07-24 22:29:55.671602] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:30.189 [2024-07-24 22:29:55.671628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.671637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.671644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.189 [2024-07-24 22:29:55.671657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.189 [2024-07-24 22:29:55.671682] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.189 [2024-07-24 22:29:55.671830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.189 [2024-07-24 22:29:55.671846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.189 [2024-07-24 22:29:55.671853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.189 [2024-07-24 22:29:55.671861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.671875] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:30.190 [2024-07-24 22:29:55.671890] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:30.190 [2024-07-24 22:29:55.671904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.671913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.671920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.671932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.671954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.672084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.672096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.672103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.672121] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:30.190 [2024-07-24 22:29:55.672137] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:30.190 [2024-07-24 22:29:55.672149] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672158] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672165] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.672176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.672204] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.672332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.672345] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.672352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672359] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.672369] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:30.190 [2024-07-24 22:29:55.672386] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672403] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.672414] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.672435] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.672580] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.672595] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.672602] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.672619] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:30.190 [2024-07-24 22:29:55.672629] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:30.190 [2024-07-24 22:29:55.672643] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:30.190 [2024-07-24 22:29:55.672754] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:30.190 [2024-07-24 22:29:55.672763] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:20:30.190 [2024-07-24 22:29:55.672780] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.672807] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.672830] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.672959] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.672972] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.672979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.672986] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.672996] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:30.190 [2024-07-24 22:29:55.673012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673021] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.673044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.673067] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 
22:29:55.673193] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.673206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.673213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673220] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.673228] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:30.190 [2024-07-24 22:29:55.673238] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:30.190 [2024-07-24 22:29:55.673251] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:30.190 [2024-07-24 22:29:55.673272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:30.190 [2024-07-24 22:29:55.673291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673299] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.673311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.190 [2024-07-24 22:29:55.673333] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.673508] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.190 [2024-07-24 22:29:55.673523] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:20:30.190 [2024-07-24 22:29:55.673530] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673538] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bd400): datao=0, datal=4096, cccid=0 00:20:30.190 [2024-07-24 22:29:55.673547] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d3c0) on tqpair(0x17bd400): expected_datao=0, payload_size=4096 00:20:30.190 [2024-07-24 22:29:55.673556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673575] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.673585] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.719519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.719527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.719549] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:30.190 [2024-07-24 22:29:55.719559] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:30.190 [2024-07-24 22:29:55.719568] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:30.190 [2024-07-24 22:29:55.719578] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:30.190 [2024-07-24 22:29:55.719587] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and 
write: 1 00:20:30.190 [2024-07-24 22:29:55.719596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:30.190 [2024-07-24 22:29:55.719618] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:30.190 [2024-07-24 22:29:55.719638] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719647] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.719668] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.190 [2024-07-24 22:29:55.719693] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.190 [2024-07-24 22:29:55.719828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.190 [2024-07-24 22:29:55.719844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.190 [2024-07-24 22:29:55.719851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719859] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.190 [2024-07-24 22:29:55.719872] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.190 [2024-07-24 22:29:55.719887] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bd400) 00:20:30.190 [2024-07-24 22:29:55.719898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.191 [2024-07-24 22:29:55.719909] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719917] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.719934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.191 [2024-07-24 22:29:55.719944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719952] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.719969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.191 [2024-07-24 22:29:55.719979] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.719994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.720003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.191 [2024-07-24 22:29:55.720013] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:30.191 [2024-07-24 22:29:55.720034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 
30000 ms) 00:20:30.191 [2024-07-24 22:29:55.720048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.720068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.191 [2024-07-24 22:29:55.720091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d3c0, cid 0, qid 0 00:20:30.191 [2024-07-24 22:29:55.720103] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d540, cid 1, qid 0 00:20:30.191 [2024-07-24 22:29:55.720116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d6c0, cid 2, qid 0 00:20:30.191 [2024-07-24 22:29:55.720125] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.191 [2024-07-24 22:29:55.720133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d9c0, cid 4, qid 0 00:20:30.191 [2024-07-24 22:29:55.720302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.720315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.720322] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720329] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d9c0) on tqpair=0x17bd400 00:20:30.191 [2024-07-24 22:29:55.720339] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:30.191 [2024-07-24 22:29:55.720349] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:30.191 [2024-07-24 22:29:55.720368] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.720389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.191 [2024-07-24 22:29:55.720411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d9c0, cid 4, qid 0 00:20:30.191 [2024-07-24 22:29:55.720555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.191 [2024-07-24 22:29:55.720568] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.191 [2024-07-24 22:29:55.720576] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720583] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bd400): datao=0, datal=4096, cccid=4 00:20:30.191 [2024-07-24 22:29:55.720592] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d9c0) on tqpair(0x17bd400): expected_datao=0, payload_size=4096 00:20:30.191 [2024-07-24 22:29:55.720600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720617] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720627] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.720697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.720704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d9c0) on tqpair=0x17bd400 00:20:30.191 [2024-07-24 22:29:55.720733] 
nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:30.191 [2024-07-24 22:29:55.720774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720785] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.720797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.191 [2024-07-24 22:29:55.720810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720818] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.720825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.720835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.191 [2024-07-24 22:29:55.720864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d9c0, cid 4, qid 0 00:20:30.191 [2024-07-24 22:29:55.720880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181db40, cid 5, qid 0 00:20:30.191 [2024-07-24 22:29:55.721049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.191 [2024-07-24 22:29:55.721062] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.191 [2024-07-24 22:29:55.721069] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.721076] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bd400): datao=0, datal=1024, cccid=4 00:20:30.191 [2024-07-24 22:29:55.721085] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d9c0) on tqpair(0x17bd400): expected_datao=0, 
payload_size=1024 00:20:30.191 [2024-07-24 22:29:55.721093] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.721104] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.721112] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.721122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.721132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.721139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.721146] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181db40) on tqpair=0x17bd400 00:20:30.191 [2024-07-24 22:29:55.766493] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.766512] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.766519] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.766527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d9c0) on tqpair=0x17bd400 00:20:30.191 [2024-07-24 22:29:55.766553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.766563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.766575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.191 [2024-07-24 22:29:55.766607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d9c0, cid 4, qid 0 00:20:30.191 [2024-07-24 22:29:55.766762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.191 [2024-07-24 22:29:55.766775] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.191 [2024-07-24 22:29:55.766782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.766789] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bd400): datao=0, datal=3072, cccid=4 00:20:30.191 [2024-07-24 22:29:55.766798] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d9c0) on tqpair(0x17bd400): expected_datao=0, payload_size=3072 00:20:30.191 [2024-07-24 22:29:55.766806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.766827] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.766837] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808608] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.808627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.808635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808643] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d9c0) on tqpair=0x17bd400 00:20:30.191 [2024-07-24 22:29:55.808660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808670] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bd400) 00:20:30.191 [2024-07-24 22:29:55.808682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.191 [2024-07-24 22:29:55.808713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d9c0, cid 4, qid 0 00:20:30.191 [2024-07-24 22:29:55.808850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.191 [2024-07-24 
22:29:55.808863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.191 [2024-07-24 22:29:55.808871] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808878] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bd400): datao=0, datal=8, cccid=4 00:20:30.191 [2024-07-24 22:29:55.808886] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x181d9c0) on tqpair(0x17bd400): expected_datao=0, payload_size=8 00:20:30.191 [2024-07-24 22:29:55.808895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808906] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.808914] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.191 [2024-07-24 22:29:55.854509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.191 [2024-07-24 22:29:55.854527] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.191 [2024-07-24 22:29:55.854534] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.192 [2024-07-24 22:29:55.854542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d9c0) on tqpair=0x17bd400 00:20:30.192 ===================================================== 00:20:30.192 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:30.192 ===================================================== 00:20:30.192 Controller Capabilities/Features 00:20:30.192 ================================ 00:20:30.192 Vendor ID: 0000 00:20:30.192 Subsystem Vendor ID: 0000 00:20:30.192 Serial Number: .................... 00:20:30.192 Model Number: ........................................ 
00:20:30.192 Firmware Version: 24.09 00:20:30.192 Recommended Arb Burst: 0 00:20:30.192 IEEE OUI Identifier: 00 00 00 00:20:30.192 Multi-path I/O 00:20:30.192 May have multiple subsystem ports: No 00:20:30.192 May have multiple controllers: No 00:20:30.192 Associated with SR-IOV VF: No 00:20:30.192 Max Data Transfer Size: 131072 00:20:30.192 Max Number of Namespaces: 0 00:20:30.192 Max Number of I/O Queues: 1024 00:20:30.192 NVMe Specification Version (VS): 1.3 00:20:30.192 NVMe Specification Version (Identify): 1.3 00:20:30.192 Maximum Queue Entries: 128 00:20:30.192 Contiguous Queues Required: Yes 00:20:30.192 Arbitration Mechanisms Supported 00:20:30.192 Weighted Round Robin: Not Supported 00:20:30.192 Vendor Specific: Not Supported 00:20:30.192 Reset Timeout: 15000 ms 00:20:30.192 Doorbell Stride: 4 bytes 00:20:30.192 NVM Subsystem Reset: Not Supported 00:20:30.192 Command Sets Supported 00:20:30.192 NVM Command Set: Supported 00:20:30.192 Boot Partition: Not Supported 00:20:30.192 Memory Page Size Minimum: 4096 bytes 00:20:30.192 Memory Page Size Maximum: 4096 bytes 00:20:30.192 Persistent Memory Region: Not Supported 00:20:30.192 Optional Asynchronous Events Supported 00:20:30.192 Namespace Attribute Notices: Not Supported 00:20:30.192 Firmware Activation Notices: Not Supported 00:20:30.192 ANA Change Notices: Not Supported 00:20:30.192 PLE Aggregate Log Change Notices: Not Supported 00:20:30.192 LBA Status Info Alert Notices: Not Supported 00:20:30.192 EGE Aggregate Log Change Notices: Not Supported 00:20:30.192 Normal NVM Subsystem Shutdown event: Not Supported 00:20:30.192 Zone Descriptor Change Notices: Not Supported 00:20:30.192 Discovery Log Change Notices: Supported 00:20:30.192 Controller Attributes 00:20:30.192 128-bit Host Identifier: Not Supported 00:20:30.192 Non-Operational Permissive Mode: Not Supported 00:20:30.192 NVM Sets: Not Supported 00:20:30.192 Read Recovery Levels: Not Supported 00:20:30.192 Endurance Groups: Not Supported 00:20:30.192 
Predictable Latency Mode: Not Supported 00:20:30.192 Traffic Based Keep ALive: Not Supported 00:20:30.192 Namespace Granularity: Not Supported 00:20:30.192 SQ Associations: Not Supported 00:20:30.192 UUID List: Not Supported 00:20:30.192 Multi-Domain Subsystem: Not Supported 00:20:30.192 Fixed Capacity Management: Not Supported 00:20:30.192 Variable Capacity Management: Not Supported 00:20:30.192 Delete Endurance Group: Not Supported 00:20:30.192 Delete NVM Set: Not Supported 00:20:30.192 Extended LBA Formats Supported: Not Supported 00:20:30.192 Flexible Data Placement Supported: Not Supported 00:20:30.192 00:20:30.192 Controller Memory Buffer Support 00:20:30.192 ================================ 00:20:30.192 Supported: No 00:20:30.192 00:20:30.192 Persistent Memory Region Support 00:20:30.192 ================================ 00:20:30.192 Supported: No 00:20:30.192 00:20:30.192 Admin Command Set Attributes 00:20:30.192 ============================ 00:20:30.192 Security Send/Receive: Not Supported 00:20:30.192 Format NVM: Not Supported 00:20:30.192 Firmware Activate/Download: Not Supported 00:20:30.192 Namespace Management: Not Supported 00:20:30.192 Device Self-Test: Not Supported 00:20:30.192 Directives: Not Supported 00:20:30.192 NVMe-MI: Not Supported 00:20:30.192 Virtualization Management: Not Supported 00:20:30.192 Doorbell Buffer Config: Not Supported 00:20:30.192 Get LBA Status Capability: Not Supported 00:20:30.192 Command & Feature Lockdown Capability: Not Supported 00:20:30.192 Abort Command Limit: 1 00:20:30.192 Async Event Request Limit: 4 00:20:30.192 Number of Firmware Slots: N/A 00:20:30.192 Firmware Slot 1 Read-Only: N/A 00:20:30.192 Firmware Activation Without Reset: N/A 00:20:30.192 Multiple Update Detection Support: N/A 00:20:30.192 Firmware Update Granularity: No Information Provided 00:20:30.192 Per-Namespace SMART Log: No 00:20:30.192 Asymmetric Namespace Access Log Page: Not Supported 00:20:30.192 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:20:30.192 Command Effects Log Page: Not Supported 00:20:30.192 Get Log Page Extended Data: Supported 00:20:30.192 Telemetry Log Pages: Not Supported 00:20:30.192 Persistent Event Log Pages: Not Supported 00:20:30.192 Supported Log Pages Log Page: May Support 00:20:30.192 Commands Supported & Effects Log Page: Not Supported 00:20:30.192 Feature Identifiers & Effects Log Page:May Support 00:20:30.192 NVMe-MI Commands & Effects Log Page: May Support 00:20:30.192 Data Area 4 for Telemetry Log: Not Supported 00:20:30.192 Error Log Page Entries Supported: 128 00:20:30.192 Keep Alive: Not Supported 00:20:30.192 00:20:30.192 NVM Command Set Attributes 00:20:30.192 ========================== 00:20:30.192 Submission Queue Entry Size 00:20:30.192 Max: 1 00:20:30.192 Min: 1 00:20:30.192 Completion Queue Entry Size 00:20:30.192 Max: 1 00:20:30.192 Min: 1 00:20:30.192 Number of Namespaces: 0 00:20:30.192 Compare Command: Not Supported 00:20:30.192 Write Uncorrectable Command: Not Supported 00:20:30.192 Dataset Management Command: Not Supported 00:20:30.192 Write Zeroes Command: Not Supported 00:20:30.192 Set Features Save Field: Not Supported 00:20:30.192 Reservations: Not Supported 00:20:30.192 Timestamp: Not Supported 00:20:30.192 Copy: Not Supported 00:20:30.192 Volatile Write Cache: Not Present 00:20:30.192 Atomic Write Unit (Normal): 1 00:20:30.192 Atomic Write Unit (PFail): 1 00:20:30.192 Atomic Compare & Write Unit: 1 00:20:30.192 Fused Compare & Write: Supported 00:20:30.192 Scatter-Gather List 00:20:30.192 SGL Command Set: Supported 00:20:30.192 SGL Keyed: Supported 00:20:30.192 SGL Bit Bucket Descriptor: Not Supported 00:20:30.192 SGL Metadata Pointer: Not Supported 00:20:30.192 Oversized SGL: Not Supported 00:20:30.192 SGL Metadata Address: Not Supported 00:20:30.192 SGL Offset: Supported 00:20:30.192 Transport SGL Data Block: Not Supported 00:20:30.192 Replay Protected Memory Block: Not Supported 00:20:30.192 00:20:30.192 
Firmware Slot Information 00:20:30.192 ========================= 00:20:30.192 Active slot: 0 00:20:30.192 00:20:30.192 00:20:30.192 Error Log 00:20:30.192 ========= 00:20:30.192 00:20:30.192 Active Namespaces 00:20:30.192 ================= 00:20:30.192 Discovery Log Page 00:20:30.192 ================== 00:20:30.193 Generation Counter: 2 00:20:30.193 Number of Records: 2 00:20:30.193 Record Format: 0 00:20:30.193 00:20:30.193 Discovery Log Entry 0 00:20:30.193 ---------------------- 00:20:30.193 Transport Type: 3 (TCP) 00:20:30.193 Address Family: 1 (IPv4) 00:20:30.193 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:30.193 Entry Flags: 00:20:30.193 Duplicate Returned Information: 1 00:20:30.193 Explicit Persistent Connection Support for Discovery: 1 00:20:30.193 Transport Requirements: 00:20:30.193 Secure Channel: Not Required 00:20:30.193 Port ID: 0 (0x0000) 00:20:30.193 Controller ID: 65535 (0xffff) 00:20:30.193 Admin Max SQ Size: 128 00:20:30.193 Transport Service Identifier: 4420 00:20:30.193 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:30.193 Transport Address: 10.0.0.2 00:20:30.193 Discovery Log Entry 1 00:20:30.193 ---------------------- 00:20:30.193 Transport Type: 3 (TCP) 00:20:30.193 Address Family: 1 (IPv4) 00:20:30.193 Subsystem Type: 2 (NVM Subsystem) 00:20:30.193 Entry Flags: 00:20:30.193 Duplicate Returned Information: 0 00:20:30.193 Explicit Persistent Connection Support for Discovery: 0 00:20:30.193 Transport Requirements: 00:20:30.193 Secure Channel: Not Required 00:20:30.193 Port ID: 0 (0x0000) 00:20:30.193 Controller ID: 65535 (0xffff) 00:20:30.193 Admin Max SQ Size: 128 00:20:30.193 Transport Service Identifier: 4420 00:20:30.193 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:30.193 Transport Address: 10.0.0.2 [2024-07-24 22:29:55.854673] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:30.193 [2024-07-24 22:29:55.854697] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d3c0) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.854710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.193 [2024-07-24 22:29:55.854720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d540) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.854729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.193 [2024-07-24 22:29:55.854738] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d6c0) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.854747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.193 [2024-07-24 22:29:55.854756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.854764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.193 [2024-07-24 22:29:55.854785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.854795] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.854802] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.854815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.854843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.854969] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.854982] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 [2024-07-24 22:29:55.854989] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.854997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.855010] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855019] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855026] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.855037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.855066] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.855220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.855235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 [2024-07-24 22:29:55.855243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855250] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.855262] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:30.193 [2024-07-24 22:29:55.855271] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:30.193 [2024-07-24 22:29:55.855288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 
22:29:55.855305] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.855316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.855338] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.855476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.855498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 [2024-07-24 22:29:55.855506] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.855532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.855560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.855582] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.855715] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.855727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 [2024-07-24 22:29:55.855734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 
00:20:30.193 [2024-07-24 22:29:55.855759] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855769] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855776] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.855787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.855810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.855938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.855951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 [2024-07-24 22:29:55.855958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.855982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855991] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.855998] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.193 [2024-07-24 22:29:55.856010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.193 [2024-07-24 22:29:55.856035] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.193 [2024-07-24 22:29:55.856162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.193 [2024-07-24 22:29:55.856174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.193 
[2024-07-24 22:29:55.856181] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.856189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.193 [2024-07-24 22:29:55.856206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.193 [2024-07-24 22:29:55.856215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.856234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.856255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.856387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.856402] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.856410] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.856435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.856462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.856492] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 
0 00:20:30.194 [2024-07-24 22:29:55.856619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.856632] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.856639] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.856664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856673] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856680] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.856691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.856713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.856841] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.856853] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.856860] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856867] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.856884] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.856901] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.856912] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.856938] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.857063] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.857075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.857082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.857106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857116] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857123] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.857134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.857156] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.857280] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.857292] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.857299] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857307] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.857323] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857333] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.857351] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.857372] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.857497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.857513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.857520] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857527] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.857545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857554] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857562] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.857573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.857595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.857720] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.857732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.857739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857747] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.857764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857773] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.194 [2024-07-24 22:29:55.857791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.194 [2024-07-24 22:29:55.857812] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.194 [2024-07-24 22:29:55.857927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.194 [2024-07-24 22:29:55.857940] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.194 [2024-07-24 22:29:55.857947] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.194 [2024-07-24 22:29:55.857954] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.194 [2024-07-24 22:29:55.857971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.857980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.857987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.195 [2024-07-24 22:29:55.857999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.195 [2024-07-24 22:29:55.858020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.195 [2024-07-24 22:29:55.858119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.195 [2024-07-24 
22:29:55.858134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.195 [2024-07-24 22:29:55.858141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.195 [2024-07-24 22:29:55.858166] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858176] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858183] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.195 [2024-07-24 22:29:55.858194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.195 [2024-07-24 22:29:55.858216] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.195 [2024-07-24 22:29:55.858313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.195 [2024-07-24 22:29:55.858325] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.195 [2024-07-24 22:29:55.858332] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.195 [2024-07-24 22:29:55.858357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.858373] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.195 [2024-07-24 22:29:55.858385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.195 [2024-07-24 
22:29:55.858406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.195 [2024-07-24 22:29:55.862499] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.195 [2024-07-24 22:29:55.862516] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.195 [2024-07-24 22:29:55.862523] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.862531] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.195 [2024-07-24 22:29:55.862549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.862559] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.862566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bd400) 00:20:30.195 [2024-07-24 22:29:55.862578] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.195 [2024-07-24 22:29:55.862601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x181d840, cid 3, qid 0 00:20:30.195 [2024-07-24 22:29:55.862697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.195 [2024-07-24 22:29:55.862714] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.195 [2024-07-24 22:29:55.862722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.195 [2024-07-24 22:29:55.862730] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x181d840) on tqpair=0x17bd400 00:20:30.195 [2024-07-24 22:29:55.862744] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:30.195 00:20:30.195 22:29:55 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:30.458 [2024-07-24 22:29:55.901975] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:30.458 [2024-07-24 22:29:55.902026] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3887411 ] 00:20:30.458 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.458 [2024-07-24 22:29:55.944990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:30.458 [2024-07-24 22:29:55.945047] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:30.458 [2024-07-24 22:29:55.945057] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:30.458 [2024-07-24 22:29:55.945074] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:30.458 [2024-07-24 22:29:55.945088] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:30.458 [2024-07-24 22:29:55.945263] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:30.458 [2024-07-24 22:29:55.945303] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x187e400 0 00:20:30.458 [2024-07-24 22:29:55.951506] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:30.458 [2024-07-24 22:29:55.951530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:30.458 [2024-07-24 22:29:55.951539] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:30.458 [2024-07-24 
22:29:55.951546] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:30.458 [2024-07-24 22:29:55.951588] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.951599] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.951611] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.458 [2024-07-24 22:29:55.951627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:30.458 [2024-07-24 22:29:55.951654] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.458 [2024-07-24 22:29:55.959505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.458 [2024-07-24 22:29:55.959524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.458 [2024-07-24 22:29:55.959532] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959540] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.458 [2024-07-24 22:29:55.959559] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:30.458 [2024-07-24 22:29:55.959571] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:30.458 [2024-07-24 22:29:55.959581] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:30.458 [2024-07-24 22:29:55.959615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959633] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 
00:20:30.458 [2024-07-24 22:29:55.959645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.458 [2024-07-24 22:29:55.959670] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.458 [2024-07-24 22:29:55.959787] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.458 [2024-07-24 22:29:55.959800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.458 [2024-07-24 22:29:55.959807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959815] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.458 [2024-07-24 22:29:55.959828] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:30.458 [2024-07-24 22:29:55.959843] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:30.458 [2024-07-24 22:29:55.959856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959864] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.959871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.458 [2024-07-24 22:29:55.959883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.458 [2024-07-24 22:29:55.959905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.458 [2024-07-24 22:29:55.960011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.458 [2024-07-24 22:29:55.960027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.458 [2024-07-24 
22:29:55.960034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.458 [2024-07-24 22:29:55.960052] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:30.458 [2024-07-24 22:29:55.960067] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:30.458 [2024-07-24 22:29:55.960080] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960088] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.458 [2024-07-24 22:29:55.960107] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.458 [2024-07-24 22:29:55.960129] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.458 [2024-07-24 22:29:55.960241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.458 [2024-07-24 22:29:55.960254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.458 [2024-07-24 22:29:55.960261] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960269] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.458 [2024-07-24 22:29:55.960278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:30.458 [2024-07-24 22:29:55.960295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 
22:29:55.960304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.458 [2024-07-24 22:29:55.960327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.458 [2024-07-24 22:29:55.960350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.458 [2024-07-24 22:29:55.960460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.458 [2024-07-24 22:29:55.960473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.458 [2024-07-24 22:29:55.960489] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.458 [2024-07-24 22:29:55.960506] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:30.458 [2024-07-24 22:29:55.960515] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:30.458 [2024-07-24 22:29:55.960530] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:30.458 [2024-07-24 22:29:55.960640] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:30.458 [2024-07-24 22:29:55.960648] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:30.458 [2024-07-24 22:29:55.960662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.458 [2024-07-24 
22:29:55.960670] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.458 [2024-07-24 22:29:55.960677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.458 [2024-07-24 22:29:55.960688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.459 [2024-07-24 22:29:55.960711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.459 [2024-07-24 22:29:55.960823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:55.960837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:55.960844] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.960851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:55.960860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:30.459 [2024-07-24 22:29:55.960877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.960887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.960894] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:55.960905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.459 [2024-07-24 22:29:55.960927] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.459 [2024-07-24 22:29:55.961039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:55.961054] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:55.961061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.961069] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:55.961077] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:30.459 [2024-07-24 22:29:55.961086] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:55.961100] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:30.459 [2024-07-24 22:29:55.961118] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:55.961134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.961142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:55.961154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.459 [2024-07-24 22:29:55.961176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.459 [2024-07-24 22:29:55.961330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.459 [2024-07-24 22:29:55.961343] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.459 [2024-07-24 22:29:55.961351] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.961358] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=4096, cccid=0 00:20:30.459 [2024-07-24 22:29:55.961366] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de3c0) on tqpair(0x187e400): expected_datao=0, payload_size=4096 00:20:30.459 [2024-07-24 22:29:55.961375] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.961393] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:55.961402] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001599] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:56.001619] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:56.001627] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001635] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:56.001648] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:30.459 [2024-07-24 22:29:56.001657] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:30.459 [2024-07-24 22:29:56.001666] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:30.459 [2024-07-24 22:29:56.001673] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:30.459 [2024-07-24 22:29:56.001682] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:30.459 [2024-07-24 22:29:56.001691] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:30.459 
[2024-07-24 22:29:56.001706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.001724] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001734] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001741] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.001753] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.459 [2024-07-24 22:29:56.001779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.459 [2024-07-24 22:29:56.001893] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:56.001906] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:56.001913] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001921] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:56.001937] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.001963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.459 [2024-07-24 22:29:56.001974] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001982] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.001989] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.001998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.459 [2024-07-24 22:29:56.002009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.002033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.459 [2024-07-24 22:29:56.002044] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002051] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002058] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.002068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.459 [2024-07-24 22:29:56.002077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002100] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002114] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.002133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.459 [2024-07-24 22:29:56.002157] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de3c0, cid 0, qid 0 00:20:30.459 [2024-07-24 22:29:56.002170] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de540, cid 1, qid 0 00:20:30.459 [2024-07-24 22:29:56.002178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de6c0, cid 2, qid 0 00:20:30.459 [2024-07-24 22:29:56.002187] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de840, cid 3, qid 0 00:20:30.459 [2024-07-24 22:29:56.002196] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.459 [2024-07-24 22:29:56.002338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:56.002353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:56.002361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:56.002377] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:30.459 [2024-07-24 22:29:56.002387] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002424] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.459 [2024-07-24 22:29:56.002464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:30.459 [2024-07-24 22:29:56.002494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.459 [2024-07-24 22:29:56.002601] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.459 [2024-07-24 22:29:56.002615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.459 [2024-07-24 22:29:56.002622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002629] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.459 [2024-07-24 22:29:56.002710] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:30.459 [2024-07-24 22:29:56.002747] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.459 [2024-07-24 22:29:56.002755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.002767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.002789] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.460 [2024-07-24 22:29:56.002920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.460 [2024-07-24 22:29:56.002933] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.460 [2024-07-24 22:29:56.002940] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.002948] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=4096, cccid=4 00:20:30.460 [2024-07-24 22:29:56.002956] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de9c0) on tqpair(0x187e400): expected_datao=0, payload_size=4096 00:20:30.460 [2024-07-24 22:29:56.002964] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.002976] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.002984] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.002997] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.003008] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.003015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.003022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.003041] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:30.460 [2024-07-24 22:29:56.003064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.003083] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.003098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.003106] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.003117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.003144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.460 [2024-07-24 22:29:56.003283] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.460 [2024-07-24 22:29:56.003296] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.460 [2024-07-24 22:29:56.003304] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.003311] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=4096, cccid=4 00:20:30.460 [2024-07-24 22:29:56.003319] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de9c0) on tqpair(0x187e400): expected_datao=0, payload_size=4096 00:20:30.460 [2024-07-24 22:29:56.003327] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.003345] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.003354] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.047518] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.047526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047534] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.047578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.047601] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.047617] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.047638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.047662] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.460 [2024-07-24 22:29:56.047782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.460 [2024-07-24 22:29:56.047799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.460 [2024-07-24 22:29:56.047806] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047814] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=4096, cccid=4 00:20:30.460 [2024-07-24 22:29:56.047822] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de9c0) on tqpair(0x187e400): expected_datao=0, payload_size=4096 00:20:30.460 [2024-07-24 22:29:56.047831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047850] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.047859] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.088589] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.088608] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.088616] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.088624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.088642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088659] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088695] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088715] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:30.460 [2024-07-24 22:29:56.088725] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:30.460 [2024-07-24 22:29:56.088734] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:30.460 [2024-07-24 
22:29:56.088743] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:30.460 [2024-07-24 22:29:56.088765] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.088774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.088786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.088798] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.088806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.088813] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.088823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:30.460 [2024-07-24 22:29:56.088851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.460 [2024-07-24 22:29:56.088864] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18deb40, cid 5, qid 0 00:20:30.460 [2024-07-24 22:29:56.088977] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.088992] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.088999] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089007] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.089019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.089029] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.089036] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089044] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18deb40) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.089061] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089070] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.089082] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.089105] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18deb40, cid 5, qid 0 00:20:30.460 [2024-07-24 22:29:56.089218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.089234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.089241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18deb40) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.089266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.089291] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.089314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18deb40, cid 5, qid 0 00:20:30.460 [2024-07-24 22:29:56.089444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 
[2024-07-24 22:29:56.089457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.089464] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18deb40) on tqpair=0x187e400 00:20:30.460 [2024-07-24 22:29:56.089501] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x187e400) 00:20:30.460 [2024-07-24 22:29:56.089524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.460 [2024-07-24 22:29:56.089547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18deb40, cid 5, qid 0 00:20:30.460 [2024-07-24 22:29:56.089665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.460 [2024-07-24 22:29:56.089679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.460 [2024-07-24 22:29:56.089686] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.460 [2024-07-24 22:29:56.089693] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18deb40) on tqpair=0x187e400 00:20:30.461 [2024-07-24 22:29:56.089719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.089730] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x187e400) 00:20:30.461 [2024-07-24 22:29:56.089742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.461 [2024-07-24 22:29:56.089755] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.089763] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x187e400) 00:20:30.461 [2024-07-24 22:29:56.089774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.461 [2024-07-24 22:29:56.089787] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.089795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x187e400) 00:20:30.461 [2024-07-24 22:29:56.089805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.461 [2024-07-24 22:29:56.089819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.089827] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x187e400) 00:20:30.461 [2024-07-24 22:29:56.089837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.461 [2024-07-24 22:29:56.089860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18deb40, cid 5, qid 0 00:20:30.461 [2024-07-24 22:29:56.089872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de9c0, cid 4, qid 0 00:20:30.461 [2024-07-24 22:29:56.089881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18decc0, cid 6, qid 0 00:20:30.461 [2024-07-24 22:29:56.089890] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18dee40, cid 7, qid 0 00:20:30.461 [2024-07-24 22:29:56.090094] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.461 [2024-07-24 22:29:56.090107] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.461 [2024-07-24 
22:29:56.090115] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090126] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=8192, cccid=5 00:20:30.461 [2024-07-24 22:29:56.090135] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18deb40) on tqpair(0x187e400): expected_datao=0, payload_size=8192 00:20:30.461 [2024-07-24 22:29:56.090143] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090164] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090174] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.461 [2024-07-24 22:29:56.090198] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.461 [2024-07-24 22:29:56.090205] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090212] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=512, cccid=4 00:20:30.461 [2024-07-24 22:29:56.090220] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18de9c0) on tqpair(0x187e400): expected_datao=0, payload_size=512 00:20:30.461 [2024-07-24 22:29:56.090229] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090239] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090247] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.461 [2024-07-24 22:29:56.090267] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.461 [2024-07-24 22:29:56.090274] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090281] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=512, cccid=6 00:20:30.461 [2024-07-24 22:29:56.090289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18decc0) on tqpair(0x187e400): expected_datao=0, payload_size=512 00:20:30.461 [2024-07-24 22:29:56.090297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090315] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:30.461 [2024-07-24 22:29:56.090341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:30.461 [2024-07-24 22:29:56.090348] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090355] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x187e400): datao=0, datal=4096, cccid=7 00:20:30.461 [2024-07-24 22:29:56.090363] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18dee40) on tqpair(0x187e400): expected_datao=0, payload_size=4096 00:20:30.461 [2024-07-24 22:29:56.090371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090382] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090390] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090403] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.461 [2024-07-24 22:29:56.090413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.461 [2024-07-24 22:29:56.090420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090428] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18deb40) on tqpair=0x187e400 00:20:30.461 [2024-07-24 22:29:56.090447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.461 [2024-07-24 22:29:56.090465] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.461 [2024-07-24 22:29:56.090472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090486] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de9c0) on tqpair=0x187e400 00:20:30.461 [2024-07-24 22:29:56.090505] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.461 [2024-07-24 22:29:56.090519] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.461 [2024-07-24 22:29:56.090526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18decc0) on tqpair=0x187e400 00:20:30.461 [2024-07-24 22:29:56.090545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.461 [2024-07-24 22:29:56.090556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.461 [2024-07-24 22:29:56.090563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.461 [2024-07-24 22:29:56.090570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18dee40) on tqpair=0x187e400 00:20:30.461 ===================================================== 00:20:30.461 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:30.461 ===================================================== 00:20:30.461 Controller Capabilities/Features 00:20:30.461 ================================ 00:20:30.461 Vendor ID: 8086 00:20:30.461 Subsystem Vendor ID: 8086 00:20:30.461 Serial Number: SPDK00000000000001 00:20:30.461 Model Number: SPDK bdev Controller 00:20:30.461 Firmware Version: 24.09 
00:20:30.461 Recommended Arb Burst: 6 00:20:30.461 IEEE OUI Identifier: e4 d2 5c 00:20:30.461 Multi-path I/O 00:20:30.461 May have multiple subsystem ports: Yes 00:20:30.461 May have multiple controllers: Yes 00:20:30.461 Associated with SR-IOV VF: No 00:20:30.461 Max Data Transfer Size: 131072 00:20:30.461 Max Number of Namespaces: 32 00:20:30.461 Max Number of I/O Queues: 127 00:20:30.461 NVMe Specification Version (VS): 1.3 00:20:30.461 NVMe Specification Version (Identify): 1.3 00:20:30.461 Maximum Queue Entries: 128 00:20:30.461 Contiguous Queues Required: Yes 00:20:30.461 Arbitration Mechanisms Supported 00:20:30.461 Weighted Round Robin: Not Supported 00:20:30.461 Vendor Specific: Not Supported 00:20:30.461 Reset Timeout: 15000 ms 00:20:30.461 Doorbell Stride: 4 bytes 00:20:30.461 NVM Subsystem Reset: Not Supported 00:20:30.461 Command Sets Supported 00:20:30.461 NVM Command Set: Supported 00:20:30.461 Boot Partition: Not Supported 00:20:30.461 Memory Page Size Minimum: 4096 bytes 00:20:30.461 Memory Page Size Maximum: 4096 bytes 00:20:30.461 Persistent Memory Region: Not Supported 00:20:30.461 Optional Asynchronous Events Supported 00:20:30.461 Namespace Attribute Notices: Supported 00:20:30.461 Firmware Activation Notices: Not Supported 00:20:30.461 ANA Change Notices: Not Supported 00:20:30.461 PLE Aggregate Log Change Notices: Not Supported 00:20:30.461 LBA Status Info Alert Notices: Not Supported 00:20:30.461 EGE Aggregate Log Change Notices: Not Supported 00:20:30.461 Normal NVM Subsystem Shutdown event: Not Supported 00:20:30.461 Zone Descriptor Change Notices: Not Supported 00:20:30.461 Discovery Log Change Notices: Not Supported 00:20:30.461 Controller Attributes 00:20:30.461 128-bit Host Identifier: Supported 00:20:30.461 Non-Operational Permissive Mode: Not Supported 00:20:30.461 NVM Sets: Not Supported 00:20:30.461 Read Recovery Levels: Not Supported 00:20:30.461 Endurance Groups: Not Supported 00:20:30.461 Predictable Latency Mode: Not Supported 
00:20:30.461 Traffic Based Keep ALive: Not Supported 00:20:30.461 Namespace Granularity: Not Supported 00:20:30.461 SQ Associations: Not Supported 00:20:30.461 UUID List: Not Supported 00:20:30.461 Multi-Domain Subsystem: Not Supported 00:20:30.461 Fixed Capacity Management: Not Supported 00:20:30.461 Variable Capacity Management: Not Supported 00:20:30.461 Delete Endurance Group: Not Supported 00:20:30.461 Delete NVM Set: Not Supported 00:20:30.461 Extended LBA Formats Supported: Not Supported 00:20:30.461 Flexible Data Placement Supported: Not Supported 00:20:30.461 00:20:30.461 Controller Memory Buffer Support 00:20:30.461 ================================ 00:20:30.461 Supported: No 00:20:30.461 00:20:30.461 Persistent Memory Region Support 00:20:30.461 ================================ 00:20:30.461 Supported: No 00:20:30.461 00:20:30.461 Admin Command Set Attributes 00:20:30.461 ============================ 00:20:30.461 Security Send/Receive: Not Supported 00:20:30.461 Format NVM: Not Supported 00:20:30.462 Firmware Activate/Download: Not Supported 00:20:30.462 Namespace Management: Not Supported 00:20:30.462 Device Self-Test: Not Supported 00:20:30.462 Directives: Not Supported 00:20:30.462 NVMe-MI: Not Supported 00:20:30.462 Virtualization Management: Not Supported 00:20:30.462 Doorbell Buffer Config: Not Supported 00:20:30.462 Get LBA Status Capability: Not Supported 00:20:30.462 Command & Feature Lockdown Capability: Not Supported 00:20:30.462 Abort Command Limit: 4 00:20:30.462 Async Event Request Limit: 4 00:20:30.462 Number of Firmware Slots: N/A 00:20:30.462 Firmware Slot 1 Read-Only: N/A 00:20:30.462 Firmware Activation Without Reset: N/A 00:20:30.462 Multiple Update Detection Support: N/A 00:20:30.462 Firmware Update Granularity: No Information Provided 00:20:30.462 Per-Namespace SMART Log: No 00:20:30.462 Asymmetric Namespace Access Log Page: Not Supported 00:20:30.462 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:30.462 Command Effects Log Page: 
Supported 00:20:30.462 Get Log Page Extended Data: Supported 00:20:30.462 Telemetry Log Pages: Not Supported 00:20:30.462 Persistent Event Log Pages: Not Supported 00:20:30.462 Supported Log Pages Log Page: May Support 00:20:30.462 Commands Supported & Effects Log Page: Not Supported 00:20:30.462 Feature Identifiers & Effects Log Page:May Support 00:20:30.462 NVMe-MI Commands & Effects Log Page: May Support 00:20:30.462 Data Area 4 for Telemetry Log: Not Supported 00:20:30.462 Error Log Page Entries Supported: 128 00:20:30.462 Keep Alive: Supported 00:20:30.462 Keep Alive Granularity: 10000 ms 00:20:30.462 00:20:30.462 NVM Command Set Attributes 00:20:30.462 ========================== 00:20:30.462 Submission Queue Entry Size 00:20:30.462 Max: 64 00:20:30.462 Min: 64 00:20:30.462 Completion Queue Entry Size 00:20:30.462 Max: 16 00:20:30.462 Min: 16 00:20:30.462 Number of Namespaces: 32 00:20:30.462 Compare Command: Supported 00:20:30.462 Write Uncorrectable Command: Not Supported 00:20:30.462 Dataset Management Command: Supported 00:20:30.462 Write Zeroes Command: Supported 00:20:30.462 Set Features Save Field: Not Supported 00:20:30.462 Reservations: Supported 00:20:30.462 Timestamp: Not Supported 00:20:30.462 Copy: Supported 00:20:30.462 Volatile Write Cache: Present 00:20:30.462 Atomic Write Unit (Normal): 1 00:20:30.462 Atomic Write Unit (PFail): 1 00:20:30.462 Atomic Compare & Write Unit: 1 00:20:30.462 Fused Compare & Write: Supported 00:20:30.462 Scatter-Gather List 00:20:30.462 SGL Command Set: Supported 00:20:30.462 SGL Keyed: Supported 00:20:30.462 SGL Bit Bucket Descriptor: Not Supported 00:20:30.462 SGL Metadata Pointer: Not Supported 00:20:30.462 Oversized SGL: Not Supported 00:20:30.462 SGL Metadata Address: Not Supported 00:20:30.462 SGL Offset: Supported 00:20:30.462 Transport SGL Data Block: Not Supported 00:20:30.462 Replay Protected Memory Block: Not Supported 00:20:30.462 00:20:30.462 Firmware Slot Information 00:20:30.462 
========================= 00:20:30.462 Active slot: 1 00:20:30.462 Slot 1 Firmware Revision: 24.09 00:20:30.462 00:20:30.462 00:20:30.462 Commands Supported and Effects 00:20:30.462 ============================== 00:20:30.462 Admin Commands 00:20:30.462 -------------- 00:20:30.462 Get Log Page (02h): Supported 00:20:30.462 Identify (06h): Supported 00:20:30.462 Abort (08h): Supported 00:20:30.462 Set Features (09h): Supported 00:20:30.462 Get Features (0Ah): Supported 00:20:30.462 Asynchronous Event Request (0Ch): Supported 00:20:30.462 Keep Alive (18h): Supported 00:20:30.462 I/O Commands 00:20:30.462 ------------ 00:20:30.462 Flush (00h): Supported LBA-Change 00:20:30.462 Write (01h): Supported LBA-Change 00:20:30.462 Read (02h): Supported 00:20:30.462 Compare (05h): Supported 00:20:30.462 Write Zeroes (08h): Supported LBA-Change 00:20:30.462 Dataset Management (09h): Supported LBA-Change 00:20:30.462 Copy (19h): Supported LBA-Change 00:20:30.462 00:20:30.462 Error Log 00:20:30.462 ========= 00:20:30.462 00:20:30.462 Arbitration 00:20:30.462 =========== 00:20:30.462 Arbitration Burst: 1 00:20:30.462 00:20:30.462 Power Management 00:20:30.462 ================ 00:20:30.462 Number of Power States: 1 00:20:30.462 Current Power State: Power State #0 00:20:30.462 Power State #0: 00:20:30.462 Max Power: 0.00 W 00:20:30.462 Non-Operational State: Operational 00:20:30.462 Entry Latency: Not Reported 00:20:30.462 Exit Latency: Not Reported 00:20:30.462 Relative Read Throughput: 0 00:20:30.462 Relative Read Latency: 0 00:20:30.462 Relative Write Throughput: 0 00:20:30.462 Relative Write Latency: 0 00:20:30.462 Idle Power: Not Reported 00:20:30.462 Active Power: Not Reported 00:20:30.462 Non-Operational Permissive Mode: Not Supported 00:20:30.462 00:20:30.462 Health Information 00:20:30.462 ================== 00:20:30.462 Critical Warnings: 00:20:30.462 Available Spare Space: OK 00:20:30.462 Temperature: OK 00:20:30.462 Device Reliability: OK 00:20:30.462 Read Only: No 
00:20:30.462 Volatile Memory Backup: OK 00:20:30.462 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:30.462 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:30.462 Available Spare: 0% 00:20:30.462 Available Spare Threshold: 0% 00:20:30.462 Life Percentage Used:[2024-07-24 22:29:56.090704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.462 [2024-07-24 22:29:56.090716] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x187e400) 00:20:30.462 [2024-07-24 22:29:56.090728] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.462 [2024-07-24 22:29:56.090752] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18dee40, cid 7, qid 0 00:20:30.462 [2024-07-24 22:29:56.090872] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.462 [2024-07-24 22:29:56.090888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.462 [2024-07-24 22:29:56.090895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.462 [2024-07-24 22:29:56.090903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18dee40) on tqpair=0x187e400 00:20:30.462 [2024-07-24 22:29:56.090950] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:30.462 [2024-07-24 22:29:56.090971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de3c0) on tqpair=0x187e400 00:20:30.462 [2024-07-24 22:29:56.090982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.462 [2024-07-24 22:29:56.090992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de540) on tqpair=0x187e400 00:20:30.462 [2024-07-24 22:29:56.091001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.462 [2024-07-24 22:29:56.091010] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de6c0) on tqpair=0x187e400 00:20:30.462 [2024-07-24 22:29:56.091019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.462 [2024-07-24 22:29:56.091028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de840) on tqpair=0x187e400 00:20:30.462 [2024-07-24 22:29:56.091036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:30.462 [2024-07-24 22:29:56.091050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.462 [2024-07-24 22:29:56.091059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.462 [2024-07-24 22:29:56.091066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x187e400) 00:20:30.463 [2024-07-24 22:29:56.091078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.463 [2024-07-24 22:29:56.091102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de840, cid 3, qid 0 00:20:30.463 [2024-07-24 22:29:56.091218] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.463 [2024-07-24 22:29:56.091233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.463 [2024-07-24 22:29:56.091241] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.091248] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de840) on tqpair=0x187e400 00:20:30.463 [2024-07-24 22:29:56.091260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.091269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:30.463 [2024-07-24 22:29:56.091279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x187e400) 00:20:30.463 [2024-07-24 22:29:56.091292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.463 [2024-07-24 22:29:56.091319] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de840, cid 3, qid 0 00:20:30.463 [2024-07-24 22:29:56.091446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.463 [2024-07-24 22:29:56.091460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.463 [2024-07-24 22:29:56.091467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.091475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de840) on tqpair=0x187e400 00:20:30.463 [2024-07-24 22:29:56.095501] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:30.463 [2024-07-24 22:29:56.095513] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:30.463 [2024-07-24 22:29:56.095533] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.095543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.095550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x187e400) 00:20:30.463 [2024-07-24 22:29:56.095570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:30.463 [2024-07-24 22:29:56.095601] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18de840, cid 3, qid 0 00:20:30.463 [2024-07-24 22:29:56.095727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:30.463 [2024-07-24 
22:29:56.095740] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:30.463 [2024-07-24 22:29:56.095748] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:30.463 [2024-07-24 22:29:56.095755] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18de840) on tqpair=0x187e400 00:20:30.463 [2024-07-24 22:29:56.095769] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 0 milliseconds 00:20:30.463 0% 00:20:30.463 Data Units Read: 0 00:20:30.463 Data Units Written: 0 00:20:30.463 Host Read Commands: 0 00:20:30.463 Host Write Commands: 0 00:20:30.463 Controller Busy Time: 0 minutes 00:20:30.463 Power Cycles: 0 00:20:30.463 Power On Hours: 0 hours 00:20:30.463 Unsafe Shutdowns: 0 00:20:30.463 Unrecoverable Media Errors: 0 00:20:30.463 Lifetime Error Log Entries: 0 00:20:30.463 Warning Temperature Time: 0 minutes 00:20:30.463 Critical Temperature Time: 0 minutes 00:20:30.463 00:20:30.463 Number of Queues 00:20:30.463 ================ 00:20:30.463 Number of I/O Submission Queues: 127 00:20:30.463 Number of I/O Completion Queues: 127 00:20:30.463 00:20:30.463 Active Namespaces 00:20:30.463 ================= 00:20:30.463 Namespace ID:1 00:20:30.463 Error Recovery Timeout: Unlimited 00:20:30.463 Command Set Identifier: NVM (00h) 00:20:30.463 Deallocate: Supported 00:20:30.463 Deallocated/Unwritten Error: Not Supported 00:20:30.463 Deallocated Read Value: Unknown 00:20:30.463 Deallocate in Write Zeroes: Not Supported 00:20:30.463 Deallocated Guard Field: 0xFFFF 00:20:30.463 Flush: Supported 00:20:30.463 Reservation: Supported 00:20:30.463 Namespace Sharing Capabilities: Multiple Controllers 00:20:30.463 Size (in LBAs): 131072 (0GiB) 00:20:30.463 Capacity (in LBAs): 131072 (0GiB) 00:20:30.463 Utilization (in LBAs): 131072 (0GiB) 00:20:30.463 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:30.463 EUI64: ABCDEF0123456789 00:20:30.463 UUID: 
dd70b050-c69e-44c7-b967-9a2e0ee43cfe 00:20:30.463 Thin Provisioning: Not Supported 00:20:30.463 Per-NS Atomic Units: Yes 00:20:30.463 Atomic Boundary Size (Normal): 0 00:20:30.463 Atomic Boundary Size (PFail): 0 00:20:30.463 Atomic Boundary Offset: 0 00:20:30.463 Maximum Single Source Range Length: 65535 00:20:30.463 Maximum Copy Length: 65535 00:20:30.463 Maximum Source Range Count: 1 00:20:30.463 NGUID/EUI64 Never Reused: No 00:20:30.463 Namespace Write Protected: No 00:20:30.463 Number of LBA Formats: 1 00:20:30.463 Current LBA Format: LBA Format #00 00:20:30.463 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:30.463 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:30.463 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:30.463 rmmod nvme_tcp 00:20:30.463 
rmmod nvme_fabrics 00:20:30.721 rmmod nvme_keyring 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3887298 ']' 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3887298 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3887298 ']' 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3887298 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3887298 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3887298' 00:20:30.721 killing process with pid 3887298 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3887298 00:20:30.721 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3887298 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:30.980 22:29:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:32.931 00:20:32.931 real 0m5.171s 00:20:32.931 user 0m4.852s 00:20:32.931 sys 0m1.585s 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.931 ************************************ 00:20:32.931 END TEST nvmf_identify 00:20:32.931 ************************************ 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:32.931 ************************************ 00:20:32.931 START TEST nvmf_perf 00:20:32.931 ************************************ 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:32.931 * Looking for test storage... 
00:20:32.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:32.931 22:29:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.931 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.932 22:29:58 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.932 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:33.189 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:33.190 22:29:58 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:34.567 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.567 22:30:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:34.567 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:08:00.0: cvl_0_0' 00:20:34.567 Found net devices under 0000:08:00.0: cvl_0_0 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:34.567 Found net devices under 0000:08:00.1: cvl_0_1 00:20:34.567 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:34.826 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:34.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:20:34.826 00:20:34.826 --- 10.0.0.2 ping statistics --- 00:20:34.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.826 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:34.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:34.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:34.826 00:20:34.826 --- 10.0.0.1 ping statistics --- 00:20:34.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:34.826 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 
00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3888941 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3888941 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3888941 ']' 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:34.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:34.826 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:34.826 [2024-07-24 22:30:00.464995] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:20:34.826 [2024-07-24 22:30:00.465096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:34.826 EAL: No free 2048 kB hugepages reported on node 1 00:20:34.826 [2024-07-24 22:30:00.529657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.085 [2024-07-24 22:30:00.646949] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:35.085 [2024-07-24 22:30:00.647008] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.085 [2024-07-24 22:30:00.647034] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.085 [2024-07-24 22:30:00.647048] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.085 [2024-07-24 22:30:00.647061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:35.085 [2024-07-24 22:30:00.647145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.085 [2024-07-24 22:30:00.647199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.085 [2024-07-24 22:30:00.647492] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.085 [2024-07-24 22:30:00.647498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:35.085 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.343 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:35.343 22:30:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:38.629 22:30:03 nvmf_tcp.nvmf_host.nvmf_perf 
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:38.629 22:30:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:38.629 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:20:38.629 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:38.887 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:38.887 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:20:38.887 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:38.887 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:38.887 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.145 [2024-07-24 22:30:04.797471] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.145 22:30:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.403 22:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:39.403 22:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.661 22:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:39.661 22:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:39.919 22:30:05 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:40.177 [2024-07-24 22:30:05.781067] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.177 22:30:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:40.434 22:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:20:40.434 22:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:40.434 22:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:40.434 22:30:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:20:41.813 Initializing NVMe Controllers 00:20:41.813 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:20:41.813 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:20:41.813 Initialization complete. Launching workers. 
00:20:41.813 ======================================================== 00:20:41.813 Latency(us) 00:20:41.813 Device Information : IOPS MiB/s Average min max 00:20:41.813 PCIE (0000:84:00.0) NSID 1 from core 0: 66866.03 261.20 477.86 55.00 4396.82 00:20:41.813 ======================================================== 00:20:41.813 Total : 66866.03 261.20 477.86 55.00 4396.82 00:20:41.813 00:20:41.813 22:30:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:41.813 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.193 Initializing NVMe Controllers 00:20:43.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:43.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:43.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:43.193 Initialization complete. Launching workers. 
00:20:43.193 ======================================================== 00:20:43.193 Latency(us) 00:20:43.193 Device Information : IOPS MiB/s Average min max 00:20:43.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 117.58 0.46 8833.82 187.48 46008.26 00:20:43.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.77 0.26 15325.13 5977.92 50880.87 00:20:43.193 ======================================================== 00:20:43.193 Total : 183.35 0.72 11162.22 187.48 50880.87 00:20:43.193 00:20:43.193 22:30:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:43.193 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.573 Initializing NVMe Controllers 00:20:44.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:44.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:44.573 Initialization complete. Launching workers. 
00:20:44.573 ======================================================== 00:20:44.573 Latency(us) 00:20:44.573 Device Information : IOPS MiB/s Average min max 00:20:44.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7346.79 28.70 4355.47 494.42 10156.41 00:20:44.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3872.87 15.13 8287.28 5111.36 17158.10 00:20:44.573 ======================================================== 00:20:44.573 Total : 11219.66 43.83 5712.68 494.42 17158.10 00:20:44.573 00:20:44.573 22:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:20:44.573 22:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:20:44.573 22:30:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.573 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.109 Initializing NVMe Controllers 00:20:47.109 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.109 Controller IO queue size 128, less than required. 00:20:47.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.109 Controller IO queue size 128, less than required. 00:20:47.109 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:47.109 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:47.109 Initialization complete. Launching workers. 
00:20:47.109 ======================================================== 00:20:47.109 Latency(us) 00:20:47.109 Device Information : IOPS MiB/s Average min max 00:20:47.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1267.00 316.75 102991.59 61820.96 137431.17 00:20:47.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 575.32 143.83 230653.80 95241.16 350540.88 00:20:47.109 ======================================================== 00:20:47.109 Total : 1842.31 460.58 142857.91 61820.96 350540.88 00:20:47.109 00:20:47.109 22:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:47.109 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.109 No valid NVMe controllers or AIO or URING devices found 00:20:47.110 Initializing NVMe Controllers 00:20:47.110 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:47.110 Controller IO queue size 128, less than required. 00:20:47.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.110 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:47.110 Controller IO queue size 128, less than required. 00:20:47.110 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:47.110 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:20:47.110 WARNING: Some requested NVMe devices were skipped 00:20:47.110 22:30:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:47.110 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.649 Initializing NVMe Controllers 00:20:49.650 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:49.650 Controller IO queue size 128, less than required. 00:20:49.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.650 Controller IO queue size 128, less than required. 00:20:49.650 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:49.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:49.650 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:49.650 Initialization complete. Launching workers. 
00:20:49.650 00:20:49.650 ==================== 00:20:49.650 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:49.650 TCP transport: 00:20:49.650 polls: 16802 00:20:49.650 idle_polls: 5260 00:20:49.650 sock_completions: 11542 00:20:49.650 nvme_completions: 4869 00:20:49.650 submitted_requests: 7316 00:20:49.650 queued_requests: 1 00:20:49.650 00:20:49.650 ==================== 00:20:49.650 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:49.650 TCP transport: 00:20:49.650 polls: 19593 00:20:49.650 idle_polls: 9061 00:20:49.650 sock_completions: 10532 00:20:49.650 nvme_completions: 4653 00:20:49.650 submitted_requests: 6962 00:20:49.650 queued_requests: 1 00:20:49.650 ======================================================== 00:20:49.650 Latency(us) 00:20:49.650 Device Information : IOPS MiB/s Average min max 00:20:49.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1216.95 304.24 107209.00 61810.91 157326.68 00:20:49.650 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1162.96 290.74 111908.41 42184.77 157313.13 00:20:49.650 ======================================================== 00:20:49.650 Total : 2379.91 594.98 109505.40 42184.77 157326.68 00:20:49.650 00:20:49.650 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:20:49.650 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.650 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:20:49.650 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:49.650 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.908 22:30:15 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.908 rmmod nvme_tcp 00:20:49.908 rmmod nvme_fabrics 00:20:49.908 rmmod nvme_keyring 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3888941 ']' 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3888941 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3888941 ']' 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3888941 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888941 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888941' 00:20:49.908 killing process with pid 3888941 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@967 -- # kill 3888941 00:20:49.908 22:30:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3888941 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.289 22:30:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.828 00:20:53.828 real 0m20.475s 00:20:53.828 user 1m4.313s 00:20:53.828 sys 0m4.677s 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:53.828 ************************************ 00:20:53.828 END TEST nvmf_perf 00:20:53.828 ************************************ 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:20:53.828 ************************************ 00:20:53.828 START TEST nvmf_fio_host 00:20:53.828 ************************************ 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:53.828 * Looking for test storage... 00:20:53.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:53.828 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.829 22:30:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.206 22:30:20 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:20:55.206 Found 0000:08:00.0 (0x8086 - 0x159b) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:20:55.206 Found 0000:08:00.1 (0x8086 - 0x159b) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:20:55.206 Found net devices under 0000:08:00.0: cvl_0_0 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:20:55.206 Found net devices under 0000:08:00.1: cvl_0_1 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:55.206 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:55.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:20:55.206 00:20:55.207 --- 10.0.0.2 ping statistics --- 00:20:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.207 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:55.207 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.207 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:20:55.207 00:20:55.207 --- 10.0.0.1 ping statistics --- 00:20:55.207 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.207 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:55.207 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:55.465 
22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3892512 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3892512 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3892512 ']' 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:55.465 22:30:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.465 [2024-07-24 22:30:20.975825] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:20:55.465 [2024-07-24 22:30:20.975918] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.465 EAL: No free 2048 kB hugepages reported on node 1 00:20:55.465 [2024-07-24 22:30:21.040453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.465 [2024-07-24 22:30:21.157748] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.465 [2024-07-24 22:30:21.157810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.465 [2024-07-24 22:30:21.157825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.465 [2024-07-24 22:30:21.157838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.465 [2024-07-24 22:30:21.157850] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.465 [2024-07-24 22:30:21.157951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.465 [2024-07-24 22:30:21.158034] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.465 [2024-07-24 22:30:21.158087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.465 [2024-07-24 22:30:21.158090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.723 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:55.723 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:20:55.724 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:55.981 [2024-07-24 22:30:21.563655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:55.981 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:55.981 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:55.981 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.981 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:56.239 Malloc1 00:20:56.239 22:30:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:56.804 22:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.062 22:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.319 [2024-07-24 22:30:22.809955] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.319 22:30:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:57.577 22:30:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.834 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:57.834 fio-3.35 
00:20:57.834 Starting 1 thread 00:20:57.834 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.364 00:21:00.364 test: (groupid=0, jobs=1): err= 0: pid=3892859: Wed Jul 24 22:30:25 2024 00:21:00.364 read: IOPS=7038, BW=27.5MiB/s (28.8MB/s)(55.2MiB/2008msec) 00:21:00.364 slat (usec): min=2, max=145, avg= 2.88, stdev= 1.80 00:21:00.364 clat (usec): min=3178, max=16781, avg=9979.99, stdev=784.62 00:21:00.364 lat (usec): min=3205, max=16783, avg=9982.88, stdev=784.51 00:21:00.364 clat percentiles (usec): 00:21:00.364 | 1.00th=[ 8291], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9372], 00:21:00.364 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:21:00.364 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:21:00.364 | 99.00th=[11731], 99.50th=[11994], 99.90th=[13042], 99.95th=[15139], 00:21:00.364 | 99.99th=[15533] 00:21:00.365 bw ( KiB/s): min=26952, max=28576, per=99.85%, avg=28114.00, stdev=776.46, samples=4 00:21:00.365 iops : min= 6738, max= 7144, avg=7028.50, stdev=194.12, samples=4 00:21:00.365 write: IOPS=7038, BW=27.5MiB/s (28.8MB/s)(55.2MiB/2008msec); 0 zone resets 00:21:00.365 slat (usec): min=2, max=139, avg= 3.04, stdev= 1.59 00:21:00.365 clat (usec): min=1479, max=14224, avg=8140.15, stdev=699.31 00:21:00.365 lat (usec): min=1489, max=14227, avg=8143.19, stdev=699.28 00:21:00.365 clat percentiles (usec): 00:21:00.365 | 1.00th=[ 6652], 5.00th=[ 7111], 10.00th=[ 7373], 20.00th=[ 7635], 00:21:00.365 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8291], 00:21:00.365 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8979], 95.00th=[ 9241], 00:21:00.365 | 99.00th=[ 9765], 99.50th=[10028], 99.90th=[12780], 99.95th=[13960], 00:21:00.365 | 99.99th=[14222] 00:21:00.365 bw ( KiB/s): min=27840, max=28624, per=100.00%, avg=28166.00, stdev=340.63, samples=4 00:21:00.365 iops : min= 6960, max= 7156, avg=7041.50, stdev=85.16, samples=4 00:21:00.365 lat (msec) : 2=0.01%, 4=0.07%, 10=75.44%, 20=24.47% 
00:21:00.365 cpu : usr=65.12%, sys=32.49%, ctx=79, majf=0, minf=40 00:21:00.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:00.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:00.365 issued rwts: total=14134,14133,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:00.365 00:21:00.365 Run status group 0 (all jobs): 00:21:00.365 READ: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=55.2MiB (57.9MB), run=2008-2008msec 00:21:00.365 WRITE: bw=27.5MiB/s (28.8MB/s), 27.5MiB/s-27.5MiB/s (28.8MB/s-28.8MB/s), io=55.2MiB (57.9MB), run=2008-2008msec 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 
00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:00.365 22:30:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:21:00.365 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:00.365 fio-3.35 00:21:00.365 Starting 1 thread 00:21:00.365 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.960 00:21:02.961 test: (groupid=0, jobs=1): err= 0: pid=3893110: Wed Jul 24 22:30:28 2024 00:21:02.961 read: IOPS=7499, BW=117MiB/s (123MB/s)(236MiB/2010msec) 00:21:02.961 slat (nsec): min=3055, max=95803, avg=4063.15, stdev=1421.98 00:21:02.961 clat (usec): min=2022, max=19267, avg=9839.15, stdev=2136.88 00:21:02.961 lat (usec): min=2026, max=19271, avg=9843.21, stdev=2136.85 00:21:02.961 clat percentiles (usec): 00:21:02.961 | 1.00th=[ 5211], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 8160], 00:21:02.961 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10159], 00:21:02.961 | 70.00th=[10683], 80.00th=[11469], 90.00th=[12649], 95.00th=[13698], 00:21:02.961 | 99.00th=[15270], 99.50th=[15926], 99.90th=[17695], 99.95th=[18220], 00:21:02.961 | 99.99th=[18744] 00:21:02.961 bw ( KiB/s): min=46944, max=70464, per=50.40%, avg=60472.00, stdev=11258.74, samples=4 00:21:02.961 iops : min= 2934, max= 4404, avg=3779.50, stdev=703.67, samples=4 00:21:02.961 write: IOPS=4431, BW=69.2MiB/s (72.6MB/s)(124MiB/1795msec); 0 zone resets 00:21:02.961 slat (usec): min=32, max=173, avg=36.98, stdev= 5.68 00:21:02.961 clat (usec): min=6374, max=22580, avg=12916.90, stdev=2328.92 00:21:02.961 lat (usec): min=6406, max=22612, avg=12953.88, stdev=2328.40 00:21:02.961 clat percentiles (usec): 00:21:02.961 | 1.00th=[ 8356], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[10945], 00:21:02.961 | 30.00th=[11469], 40.00th=[12125], 50.00th=[12780], 60.00th=[13304], 00:21:02.961 | 70.00th=[13960], 80.00th=[14877], 90.00th=[15926], 95.00th=[16909], 00:21:02.961 | 99.00th=[19006], 99.50th=[20841], 99.90th=[22152], 99.95th=[22414], 00:21:02.961 | 99.99th=[22676] 00:21:02.961 bw ( KiB/s): min=50080, max=73120, per=88.90%, 
avg=63032.00, stdev=11262.83, samples=4 00:21:02.961 iops : min= 3130, max= 4570, avg=3939.50, stdev=703.93, samples=4 00:21:02.961 lat (msec) : 4=0.22%, 10=39.80%, 20=59.78%, 50=0.20% 00:21:02.961 cpu : usr=78.06%, sys=19.70%, ctx=30, majf=0, minf=66 00:21:02.961 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:02.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.961 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.961 issued rwts: total=15073,7954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.961 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.961 00:21:02.961 Run status group 0 (all jobs): 00:21:02.961 READ: bw=117MiB/s (123MB/s), 117MiB/s-117MiB/s (123MB/s-123MB/s), io=236MiB (247MB), run=2010-2010msec 00:21:02.961 WRITE: bw=69.2MiB/s (72.6MB/s), 69.2MiB/s-69.2MiB/s (72.6MB/s-72.6MB/s), io=124MiB (130MB), run=1795-1795msec 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 
-- # for i in {1..20} 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:02.961 rmmod nvme_tcp 00:21:02.961 rmmod nvme_fabrics 00:21:02.961 rmmod nvme_keyring 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3892512 ']' 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3892512 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3892512 ']' 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3892512 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:02.961 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3892512 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3892512' 00:21:03.220 killing process with pid 3892512 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3892512 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3892512 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.220 22:30:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:05.757 00:21:05.757 real 0m11.881s 00:21:05.757 user 0m35.270s 00:21:05.757 sys 0m4.070s 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.757 ************************************ 00:21:05.757 END TEST nvmf_fio_host 00:21:05.757 ************************************ 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:05.757 22:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.757 ************************************ 00:21:05.757 START TEST nvmf_failover 00:21:05.757 ************************************ 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:05.757 * Looking for test storage... 00:21:05.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.757 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.758 22:30:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:07.136 
22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:07.136 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:07.136 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:07.136 22:30:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:07.136 Found net devices under 0000:08:00.0: cvl_0_0 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:07.136 Found net devices under 0000:08:00.1: cvl_0_1 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:07.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:07.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:21:07.136 00:21:07.136 --- 10.0.0.2 ping statistics --- 00:21:07.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.136 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:07.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:21:07.136 00:21:07.136 --- 10.0.0.1 ping statistics --- 00:21:07.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.136 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:07.136 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 
00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3894810 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3894810 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3894810 ']' 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:07.137 22:30:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:07.394 [2024-07-24 22:30:32.839940] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:21:07.394 [2024-07-24 22:30:32.840037] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.394 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.394 [2024-07-24 22:30:32.905561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:07.394 [2024-07-24 22:30:33.021923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.394 [2024-07-24 22:30:33.021983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.394 [2024-07-24 22:30:33.021999] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.394 [2024-07-24 22:30:33.022013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.394 [2024-07-24 22:30:33.022024] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:07.394 [2024-07-24 22:30:33.022107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.394 [2024-07-24 22:30:33.022161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:07.394 [2024-07-24 22:30:33.022164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.651 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:07.909 [2024-07-24 22:30:33.422752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.909 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:08.165 Malloc0 00:21:08.165 22:30:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:08.422 22:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:08.679 22:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.937 [2024-07-24 22:30:34.634876] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.194 22:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:09.452 [2024-07-24 22:30:34.935800] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:09.452 22:30:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:09.709 [2024-07-24 22:30:35.232676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3895117 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3895117 /var/tmp/bdevperf.sock 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3895117 ']' 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:09.709 22:30:35 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:09.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:09.709 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:09.967 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:09.967 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:09.967 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.532 NVMe0n1 00:21:10.532 22:30:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:10.789 00:21:10.789 22:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3895229 00:21:10.789 22:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:10.789 22:30:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:12.163 22:30:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:21:12.163 22:30:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:15.443 22:30:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:15.443 00:21:15.701 22:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:15.701 [2024-07-24 22:30:41.389470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc0340 is same with the state(5) to be set 
00:21:15.960 22:30:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:19.241 22:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.241 [2024-07-24 22:30:44.694172] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.241 22:30:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:20.172 22:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:20.430 [2024-07-24 22:30:45.941341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc10d0 is same with the state(5) to be set 
00:21:20.431 22:30:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3895229 00:21:26.995 0 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3895117 ']' 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3895117' 00:21:26.995 killing process with pid 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@972 -- # wait 3895117 00:21:26.995 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:26.995 [2024-07-24 22:30:35.301234] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:21:26.995 [2024-07-24 22:30:35.301338] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3895117 ] 00:21:26.995 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.995 [2024-07-24 22:30:35.362328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.995 [2024-07-24 22:30:35.480378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.995 Running I/O for 15 seconds... 00:21:26.995 [2024-07-24 22:30:37.711840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:70752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.711908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.711941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:70760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.711958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.711977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:70768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.711993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:26.995 [2024-07-24 22:30:37.712011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:70776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:70784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:70792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:70800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:70808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:70816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:70824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:70832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:70840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:70848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:70856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 
lba:70864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.995 [2024-07-24 22:30:37.712405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:70056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.995 [2024-07-24 22:30:37.712437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:70064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.995 [2024-07-24 22:30:37.712470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.995 [2024-07-24 22:30:37.712512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.995 [2024-07-24 22:30:37.712529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:70080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:70088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 
[2024-07-24 22:30:37.712595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:70104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:70112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:70120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:70128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:70136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712776] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:70152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:70160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:70168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:70176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.712934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:70872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.712966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.712984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:70880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.712998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:70888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:70896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:70904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:70912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 
[2024-07-24 22:30:37.713147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:70920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:70928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:70936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:70952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:70960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:70968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:70976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:70984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:70992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:71000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:71008 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:71016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:71040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 
22:30:37.713735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.996 [2024-07-24 22:30:37.713751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:70184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.713783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:70192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.713817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.996 [2024-07-24 22:30:37.713833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:70200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.996 [2024-07-24 22:30:37.713849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.713866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:70208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.713881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.713899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:70216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.713915] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.713934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:70224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.713950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.713967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:70232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.713987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:70240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:70248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:70264 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:70272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:70280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:70288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:70296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:70312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:70320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:70328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:70344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:70352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:70360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:70368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:70376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:70384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:70392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:70400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:26.997 [2024-07-24 22:30:37.714679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:70408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:70416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:70440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714863] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:70456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.714974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.714991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:70480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.715007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.715024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:70488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.715039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.715056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:70496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.715071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.715088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:70504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.715104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.715120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:70512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.997 [2024-07-24 22:30:37.715135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.997 [2024-07-24 22:30:37.715152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:70520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:70528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:70536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:26.998 [2024-07-24 22:30:37.715236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:70544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:70552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:70560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:70568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:70576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715415] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:70584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:70592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:70600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:70624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:70632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:70640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:70648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:70656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:70664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:70672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 
[2024-07-24 22:30:37.715801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.998 [2024-07-24 22:30:37.715865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.998 [2024-07-24 22:30:37.715897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:70688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.715979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.715994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:70712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.716027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:70720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.716063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:70728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.716096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:70736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.998 [2024-07-24 22:30:37.716129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cf980 is same with the state(5) to be set 00:21:26.998 [2024-07-24 22:30:37.716164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:26.998 
[2024-07-24 22:30:37.716177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:26.998 [2024-07-24 22:30:37.716191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:70744 len:8 PRP1 0x0 PRP2 0x0 00:21:26.998 [2024-07-24 22:30:37.716205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716264] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22cf980 was disconnected and freed. reset controller. 00:21:26.998 [2024-07-24 22:30:37.716288] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:26.998 [2024-07-24 22:30:37.716326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.998 [2024-07-24 22:30:37.716345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.998 [2024-07-24 22:30:37.716376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.998 [2024-07-24 22:30:37.716408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:26.998 
[2024-07-24 22:30:37.716438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.998 [2024-07-24 22:30:37.716452] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:26.998 [2024-07-24 22:30:37.720576] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:26.998 [2024-07-24 22:30:37.720620] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a9430 (9): Bad file descriptor 00:21:26.998 [2024-07-24 22:30:37.760136] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:26.998 [2024-07-24 22:30:41.392209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:32080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:32088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 
22:30:41.392377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:32120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:32136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:25 nsid:1 lba:32144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:32152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:32160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:32176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:32184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:32200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:26.999 [2024-07-24 22:30:41.392801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:32336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:32344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:32352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:32360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.392973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.392988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:32376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:32384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:32392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:32400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:32408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:32416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:32424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:32432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 [2024-07-24 22:30:41.393258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:32440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:26.999 [2024-07-24 22:30:41.393273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:26.999 
[2024-07-24 22:30:41.393290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:32448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:32464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:32472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:32480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:32488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:32496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:32504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:32512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:32520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:32528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:32536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:32544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:32552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:32576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 
22:30:41.393843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:32584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:32592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:32600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.393979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:32616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.393994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:32624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394026] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:32632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:32640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:32656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:32664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:32672 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:32680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:32704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:32712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394402] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:32728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:32736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:32744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:32752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:32760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.000 [2024-07-24 22:30:41.394581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.000 [2024-07-24 22:30:41.394598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:32768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:32776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:32784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:32792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:32800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:32808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 
22:30:41.394778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:32816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:32824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:32832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.394874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.394906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:32216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.394938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:32224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.394969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.394986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:32232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.395001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.395032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:32248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.395065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:32256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:27.001 [2024-07-24 22:30:41.395099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:32840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:32848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:32856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:32864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:32880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:32888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:32896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:32904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:32928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 
lba:32936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.001 [2024-07-24 22:30:41.395537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395571] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.001 [2024-07-24 22:30:41.395589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32944 len:8 PRP1 0x0 PRP2 0x0 00:21:27.001 [2024-07-24 22:30:41.395603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.001 [2024-07-24 22:30:41.395684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.001 [2024-07-24 22:30:41.395716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.001 [2024-07-24 22:30:41.395746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.001 [2024-07-24 22:30:41.395776] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.395790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a9430 is same with the state(5) to be set 00:21:27.001 [2024-07-24 22:30:41.396075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.001 [2024-07-24 22:30:41.396095] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.001 [2024-07-24 22:30:41.396109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32952 len:8 PRP1 0x0 PRP2 0x0 00:21:27.001 [2024-07-24 22:30:41.396129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.396149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.001 [2024-07-24 22:30:41.396162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.001 [2024-07-24 22:30:41.396175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32960 len:8 PRP1 0x0 PRP2 0x0 00:21:27.001 [2024-07-24 22:30:41.396189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.001 [2024-07-24 22:30:41.396204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.001 [2024-07-24 22:30:41.396216] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.001 [2024-07-24 22:30:41.396228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32968 len:8 PRP1 0x0 PRP2 0x0 00:21:27.001 [2024-07-24 22:30:41.396248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:27.001 [2024-07-24 22:30:41.396263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32976 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32984 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32992 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396437] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:21:27.002 [2024-07-24 22:30:41.396449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33000 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33008 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396541] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33016 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396610] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33024 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396651] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396663] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33032 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33040 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396773] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33048 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 
[2024-07-24 22:30:41.396829] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33056 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396882] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33064 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33072 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.396960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.396974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.396987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.396999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:33080 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33088 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397085] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32264 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32272 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397196] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32280 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32288 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.397290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.397304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.397316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.397328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32296 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.414062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.414105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.414121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.414135] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32304 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.414150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.414164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.414176] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.414189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32312 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.414205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.414219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.414231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.002 [2024-07-24 22:30:41.414243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32320 len:8 PRP1 0x0 PRP2 0x0 00:21:27.002 [2024-07-24 22:30:41.414257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.002 [2024-07-24 22:30:41.414272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.002 [2024-07-24 22:30:41.414284] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32072 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414327] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32080 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414400] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32088 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32096 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414519] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32104 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32112 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32120 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414676] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32128 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414715] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414742] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32136 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32144 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414840] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32152 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 
[2024-07-24 22:30:41.414905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32160 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.414957] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.414969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32168 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.414983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.414997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32176 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:32184 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32192 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415169] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32200 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32328 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415265] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32336 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32344 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32352 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 
22:30:41.415448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32360 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415476] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.003 [2024-07-24 22:30:41.415508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32368 len:8 PRP1 0x0 PRP2 0x0 00:21:27.003 [2024-07-24 22:30:41.415524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.003 [2024-07-24 22:30:41.415538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.003 [2024-07-24 22:30:41.415551] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32376 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32384 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32392 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415710] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32400 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32408 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415829] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32416 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32424 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.415947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32432 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.415961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.415975] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.415987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32440 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 
[2024-07-24 22:30:41.416014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416028] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32448 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32456 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416150] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32464 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416191] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32472 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32480 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32488 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416349] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32496 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32504 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416470] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32512 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32520 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416592] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32528 len:8 PRP1 0x0 PRP2 0x0 00:21:27.004 [2024-07-24 22:30:41.416630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.004 [2024-07-24 22:30:41.416644] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.004 [2024-07-24 22:30:41.416656] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.004 [2024-07-24 22:30:41.416668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32536 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416696] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32544 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416760] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32552 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416801] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32560 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416871] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32568 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416910] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32576 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.416956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.416971] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.416983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.416996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32584 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32592 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417090] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32600 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417130] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 
[2024-07-24 22:30:41.417143] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32608 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417183] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32616 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32624 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:32632 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32640 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32648 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417468] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32656 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417529] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32664 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32672 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.417624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.417639] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.417651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.417664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32680 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.431175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.431190] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 
22:30:41.431205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32688 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.431234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.431246] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.431258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32696 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.431293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.431305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.431318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32704 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.431346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.431358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.431371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32712 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.005 [2024-07-24 22:30:41.431398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.005 [2024-07-24 22:30:41.431410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.005 [2024-07-24 22:30:41.431423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32720 len:8 PRP1 0x0 PRP2 0x0 00:21:27.005 [2024-07-24 22:30:41.431437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32728 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431544] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32736 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431585] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431597] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32744 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32752 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431691] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32760 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431749] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 
[2024-07-24 22:30:41.431787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431802] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32776 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431854] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32784 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.006 [2024-07-24 22:30:41.431919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.006 [2024-07-24 22:30:41.431931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32792 len:8 PRP1 0x0 PRP2 0x0 00:21:27.006 [2024-07-24 22:30:41.431945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.006 [2024-07-24 22:30:41.431959] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o
[log condensed: 2024-07-24 22:30:41.431971 through 22:30:41.433371, nvme_qpair.c 579:nvme_qpair_abort_queued_reqs / 558:nvme_qpair_manual_complete_request / 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion repeat for every request still queued on qid:1; each WRITE (lba:32800 through lba:32944, len:8, step 8) and READ (lba:32208 through lba:32256, len:8, step 8) is completed manually as ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
[2024-07-24 22:30:41.433435] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c8c30 was disconnected and freed. reset controller.
[2024-07-24 22:30:41.433460] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[2024-07-24 22:30:41.433489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-24 22:30:41.433556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a9430 (9): Bad file descriptor
[2024-07-24 22:30:41.437641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 22:30:41.550824] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[log condensed: 2024-07-24 22:30:45.943595 through 22:30:45.946034, nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion repeat for the I/O queued across the reset; each READ (lba:61768 through lba:62016, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba:62024 through lba:62344, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on qid:1 is reported as ABORTED - SQ DELETION (00/08) cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:21:27.009 [2024-07-24 22:30:45.946034] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 
lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:27.009 [2024-07-24 22:30:45.946334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.009 [2024-07-24 22:30:45.946392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62424 len:8 PRP1 0x0 PRP2 0x0 00:21:27.009 [2024-07-24 22:30:45.946407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.009 [2024-07-24 22:30:45.946440] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:21:27.009 [2024-07-24 22:30:45.946452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62432 len:8 PRP1 0x0 PRP2 0x0 00:21:27.009 [2024-07-24 22:30:45.946466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.009 [2024-07-24 22:30:45.946501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.009 [2024-07-24 22:30:45.946514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62440 len:8 PRP1 0x0 PRP2 0x0 00:21:27.009 [2024-07-24 22:30:45.946528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946546] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.009 [2024-07-24 22:30:45.946559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.009 [2024-07-24 22:30:45.946572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62448 len:8 PRP1 0x0 PRP2 0x0 00:21:27.009 [2024-07-24 22:30:45.946585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946600] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.009 [2024-07-24 22:30:45.946612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.009 [2024-07-24 22:30:45.946625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62456 len:8 PRP1 0x0 PRP2 0x0 00:21:27.009 [2024-07-24 22:30:45.946639] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.009 [2024-07-24 22:30:45.946653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946665] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62464 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62472 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62480 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946818] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946829] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62488 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946883] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62496 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946924] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.946948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62504 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.946971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.946986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.946998] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62512 len:8 PRP1 0x0 PRP2 0x0 
00:21:27.010 [2024-07-24 22:30:45.947024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62520 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62528 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62536 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947198] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62544 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62552 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62560 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947387] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62568 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62576 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62584 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62592 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947584] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62600 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62608 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947702] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62616 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947756] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62624 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947812] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62632 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947865] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.010 [2024-07-24 22:30:45.947878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62640 len:8 PRP1 0x0 PRP2 0x0 00:21:27.010 [2024-07-24 22:30:45.947892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.010 [2024-07-24 22:30:45.947906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.010 [2024-07-24 22:30:45.947919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.947931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62648 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 
[2024-07-24 22:30:45.947945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.947960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.947972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.947985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62656 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.947999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948013] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948025] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62664 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62672 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948131] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62680 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948172] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948189] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62688 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948247] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62696 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948314] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62704 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62712 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62720 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62728 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62736 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62744 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948622] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62752 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62760 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948751] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62768 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948793] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62776 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:27.011 [2024-07-24 22:30:45.948859] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:27.011 [2024-07-24 22:30:45.948871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62784 len:8 PRP1 0x0 PRP2 0x0 00:21:27.011 [2024-07-24 22:30:45.948885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.948948] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22d9210 was disconnected and freed. reset controller. 00:21:27.011 [2024-07-24 22:30:45.948971] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:27.011 [2024-07-24 22:30:45.949011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.011 [2024-07-24 22:30:45.949030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.949047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.011 [2024-07-24 22:30:45.949062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.949078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.011 [2024-07-24 22:30:45.949092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.949107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:27.011 [2024-07-24 22:30:45.949123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:27.011 [2024-07-24 22:30:45.949138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:27.011 [2024-07-24 22:30:45.949198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22a9430 (9): Bad file descriptor 00:21:27.011 [2024-07-24 22:30:45.953241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:27.011 [2024-07-24 22:30:45.994116] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:27.011 00:21:27.011 Latency(us) 00:21:27.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.011 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:27.011 Verification LBA range: start 0x0 length 0x4000 00:21:27.011 NVMe0n1 : 15.02 7366.21 28.77 385.48 0.00 16478.50 658.39 53205.52 00:21:27.011 =================================================================================================================== 00:21:27.011 Total : 7366.21 28.77 385.48 0.00 16478.50 658.39 53205.52 00:21:27.011 Received shutdown signal, test time was about 15.000000 seconds 00:21:27.011 00:21:27.011 Latency(us) 00:21:27.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.012 =================================================================================================================== 00:21:27.012 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3896627 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
verify -t 1 -f 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3896627 /var/tmp/bdevperf.sock 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3896627 ']' 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:27.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:27.012 22:30:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.012 22:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.012 22:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:27.012 22:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:27.012 [2024-07-24 22:30:52.433927] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:27.012 22:30:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:27.270 [2024-07-24 22:30:52.730762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:27.270 22:30:52 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:27.527 NVMe0n1 00:21:27.527 22:30:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.094 00:21:28.094 22:30:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:28.352 00:21:28.352 22:30:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:28.352 22:30:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:28.917 22:30:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.174 22:30:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:32.483 22:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:32.483 22:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:32.483 22:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3897142 00:21:32.483 22:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.483 22:30:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3897142 00:21:33.434 0 00:21:33.434 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:33.434 [2024-07-24 22:30:51.883262] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:21:33.434 [2024-07-24 22:30:51.883361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3896627 ] 00:21:33.434 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.434 [2024-07-24 22:30:51.944335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.434 [2024-07-24 22:30:52.061002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.434 [2024-07-24 22:30:54.627124] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:33.434 [2024-07-24 22:30:54.627217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.434 [2024-07-24 22:30:54.627241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.434 [2024-07-24 22:30:54.627261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.434 [2024-07-24 22:30:54.627276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.434 [2024-07-24 22:30:54.627291] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.434 [2024-07-24 22:30:54.627305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.434 [2024-07-24 22:30:54.627320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:33.434 [2024-07-24 22:30:54.627334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:33.434 [2024-07-24 22:30:54.627350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:33.434 [2024-07-24 22:30:54.627402] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:33.434 [2024-07-24 22:30:54.627436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x246a430 (9): Bad file descriptor 00:21:33.434 [2024-07-24 22:30:54.638944] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:33.434 Running I/O for 1 seconds... 
00:21:33.434 00:21:33.434 Latency(us) 00:21:33.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.434 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:33.434 Verification LBA range: start 0x0 length 0x4000 00:21:33.434 NVMe0n1 : 1.01 7556.29 29.52 0.00 0.00 16859.27 3568.07 13786.83 00:21:33.434 =================================================================================================================== 00:21:33.434 Total : 7556.29 29.52 0.00 0.00 16859.27 3568.07 13786.83 00:21:33.434 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:33.434 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:21:33.692 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.258 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:34.258 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:21:34.258 22:30:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.515 22:31:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:37.790 
22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3896627 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3896627 ']' 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3896627 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3896627 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3896627' 00:21:37.790 killing process with pid 3896627 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3896627 00:21:37.790 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3896627 00:21:38.047 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:38.047 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:38.304 22:31:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:38.304 rmmod nvme_tcp 00:21:38.561 rmmod nvme_fabrics 00:21:38.561 rmmod nvme_keyring 00:21:38.561 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:38.561 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:38.561 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3894810 ']' 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3894810 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3894810 ']' 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3894810 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3894810 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3894810' 00:21:38.562 killing process with pid 3894810 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3894810 00:21:38.562 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3894810 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:38.820 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.821 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.821 22:31:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:40.723 00:21:40.723 real 0m35.355s 00:21:40.723 user 2m6.334s 00:21:40.723 sys 0m5.624s 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:40.723 ************************************ 00:21:40.723 END TEST nvmf_failover 00:21:40.723 ************************************ 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:40.723 ************************************ 00:21:40.723 START TEST nvmf_host_discovery 00:21:40.723 ************************************ 00:21:40.723 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:40.982 * Looking for test storage... 00:21:40.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.982 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:40.983 22:31:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.888 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.889 22:31:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:42.889 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.889 22:31:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:42.889 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.889 22:31:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:42.889 Found net devices under 0000:08:00.0: cvl_0_0 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:42.889 Found net devices under 0000:08:00.1: cvl_0_1 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
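The xtrace above (nvmf/common.sh@382–@401) shows how the test maps each detected PCI NIC to its kernel interface name: glob `/sys/bus/pci/devices/$pci/net/`*, then strip the path prefix with `##*/`. A minimal standalone sketch of that step follows; `pci_to_net_devs` and the `SYSFS_ROOT` override are hypothetical names introduced here so the logic can be exercised against a fake sysfs tree — the real script reads `/sys` directly.

```shell
# Sketch (not the SPDK nvmf/common.sh code itself): resolve a PCI address
# to the net devices the kernel exposes for it, the same way the trace
# above does with a sysfs glob plus a "${arr[@]##*/}" strip.
pci_to_net_devs() {
    local pci=$1
    local sysfs=${SYSFS_ROOT:-/sys}   # hypothetical override for testing
    local pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, e.g. ".../net/cvl_0_0" -> "cvl_0_0"
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "${pci_net_devs[@]}"
}
```

For a device like 0000:08:00.0 bound to the ice driver, this is what produces the "Found net devices under 0000:08:00.0: cvl_0_0" line in the log.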
00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.889 22:31:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:21:42.889 00:21:42.889 --- 10.0.0.2 ping statistics --- 00:21:42.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.889 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:21:42.889 00:21:42.889 --- 10.0.0.1 ping statistics --- 00:21:42.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.889 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3899231 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3899231 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3899231 ']' 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.889 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.889 [2024-07-24 22:31:08.266647] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:21:42.890 [2024-07-24 22:31:08.266750] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.890 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.890 [2024-07-24 22:31:08.333850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.890 [2024-07-24 22:31:08.452450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.890 [2024-07-24 22:31:08.452528] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:42.890 [2024-07-24 22:31:08.452544] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.890 [2024-07-24 22:31:08.452557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.890 [2024-07-24 22:31:08.452569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:42.890 [2024-07-24 22:31:08.452608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.890 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.890 [2024-07-24 22:31:08.588640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 [2024-07-24 22:31:08.596801] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 null0 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 null1 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3899251 00:21:43.148 
22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3899251 /tmp/host.sock 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3899251 ']' 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:43.148 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.148 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.148 [2024-07-24 22:31:08.682240] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:21:43.148 [2024-07-24 22:31:08.682336] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3899251 ] 00:21:43.148 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.148 [2024-07-24 22:31:08.743265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.406 [2024-07-24 22:31:08.860596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
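Both app launches above follow the same `waitforlisten <pid> <rpc_addr>` pattern: start `nvmf_tgt`, then poll (with `max_retries=100`) until the process is up and listening on its RPC unix socket (`/var/tmp/spdk.sock` or `/tmp/host.sock`). A simplified reimplementation of that polling loop, assuming nothing beyond bash: `waitforlisten_sketch` is a name invented here, and unlike the real autotest_common.sh helper it only checks that the process is alive and the socket path exists rather than issuing a confirming RPC.

```shell
# Sketch of the waitforlisten pattern seen in the trace: poll until the
# target pid has created its RPC socket path, giving up after max_retries.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    while (( max_retries-- )); do
        # The real helper also sends an RPC to confirm the app responds;
        # this sketch only checks liveness and socket-path existence.
        if kill -0 "$pid" 2>/dev/null && [[ -e $rpc_addr ]]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

This is why the log prints "Waiting for process to start up and listen on UNIX domain socket ..." before each app becomes usable.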
00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.406 22:31:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.406 22:31:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.406 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r 
'.[].name' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
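The get_subsystem_names/get_bdev_list helpers exercised above all share one shape: `rpc_cmd -s /tmp/host.sock <method> | jq -r '.[].name' | sort | xargs`, yielding an empty string when no controllers or bdevs exist yet (hence the `[[ '' == '' ]]` checks). As a dependency-free stand-in for the jq stage — explicitly a substitute using `tr`/`sed`, not what the script runs — the same "collect every `.name` field" step can be sketched as:

```shell
# Illustrative stand-in for `jq -r '.[].name' | sort | xargs`:
# split the JSON on '{' and ',' so each "name":"..." pair lands on its
# own line, extract the values, sort them, and join with spaces.
get_names_from_json() {
    tr '{,' '\n\n' |
        sed -n 's/.*"name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p' |
        sort | xargs
}
```

An empty JSON array produces an empty string, matching the log's comparisons against `''` before any subsystem or bdev is created.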
00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 [2024-07-24 22:31:09.274659] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.665 22:31:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.665 22:31:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.665 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:21:43.923 22:31:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # sleep 1 00:21:44.488 [2024-07-24 22:31:10.036746] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:44.488 [2024-07-24 22:31:10.036804] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:44.488 [2024-07-24 22:31:10.036841] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:44.488 [2024-07-24 22:31:10.123098] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:44.745 [2024-07-24 22:31:10.349342] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:44.745 [2024-07-24 22:31:10.349372] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 
-- # xargs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
[[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # return 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:45.004 
22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:45.004 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.005 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.263 [2024-07-24 22:31:10.735145] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:45.263 [2024-07-24 22:31:10.735441] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:45.263 [2024-07-24 22:31:10.735494] 
bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # return 0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.263 [2024-07-24 22:31:10.864400] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:45.263 22:31:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # 
sleep 1 00:21:45.521 [2024-07-24 22:31:11.173878] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:45.521 [2024-07-24 22:31:11.173931] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.521 [2024-07-24 22:31:11.173943] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.454 22:31:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.454 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.455 [2024-07-24 22:31:11.967286] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:46.455 [2024-07-24 22:31:11.967334] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.455 [2024-07-24 22:31:11.970571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.455 [2024-07-24 22:31:11.970615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.455 [2024-07-24 22:31:11.970635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:46.455 [2024-07-24 22:31:11.970650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.455 [2024-07-24 22:31:11.970666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.455 [2024-07-24 22:31:11.970681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.455 [2024-07-24 22:31:11.970696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.455 [2024-07-24 22:31:11.970711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.455 [2024-07-24 22:31:11.970726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:46.455 22:31:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.455 [2024-07-24 22:31:11.980575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 22:31:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.455 [2024-07-24 22:31:11.990625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:11.990885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:11.990928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:11.990948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:11.990976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:11.991015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:11.991034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.455 
[2024-07-24 22:31:11.991052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.455 [2024-07-24 22:31:11.991077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.455 [2024-07-24 22:31:12.000713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:12.000856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:12.000886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:12.000903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:12.000928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:12.000950] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:12.000965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.455 [2024-07-24 22:31:12.000979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.455 [2024-07-24 22:31:12.001000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.455 [2024-07-24 22:31:12.010790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:12.010964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:12.010993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:12.011010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:12.011034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:12.011079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:12.011098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.455 [2024-07-24 22:31:12.011113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.455 [2024-07-24 22:31:12.011134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.455 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.455 [2024-07-24 22:31:12.020871] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:12.021036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:12.021064] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:12.021081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:12.021106] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:12.021128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:12.021142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.455 [2024-07-24 22:31:12.021158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.455 [2024-07-24 22:31:12.021178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
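The `waitforcondition` helper whose xtrace dominates this section (`local max=10`, `(( max-- ))`, `eval` of the condition string at `autotest_common.sh` lines 912-918) follows a standard poll-with-retry pattern. The sketch below is a reconstruction from the trace, not the actual SPDK source; the function name and the `cond`/`max` variables mirror what the log shows, the body is inferred:

```shell
# Reconstruction of the polling helper visible in the xtrace above
# (autotest_common.sh @912-918). Sketch only, not SPDK source.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval the caller's condition string, e.g.
        # '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

In the trace it is invoked as `waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'`; passing the condition as a string is what forces the `eval '[[' ...` lines seen in the xtrace.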
00:21:46.455 [2024-07-24 22:31:12.030948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:12.031102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:12.031132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:12.031149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:12.031173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:12.031225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:12.031251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.455 [2024-07-24 22:31:12.031266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.455 [2024-07-24 22:31:12.031288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.455 [2024-07-24 22:31:12.041027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.455 [2024-07-24 22:31:12.041196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.455 [2024-07-24 22:31:12.041225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.455 [2024-07-24 22:31:12.041241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.455 [2024-07-24 22:31:12.041266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.455 [2024-07-24 22:31:12.041300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.455 [2024-07-24 22:31:12.041317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.041332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.041353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.456 [2024-07-24 22:31:12.051107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.456 [2024-07-24 22:31:12.051272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.456 [2024-07-24 22:31:12.051302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.456 [2024-07-24 22:31:12.051319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.456 [2024-07-24 22:31:12.051343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.456 [2024-07-24 22:31:12.051391] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.456 [2024-07-24 22:31:12.051410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.051425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.051447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.456 [2024-07-24 22:31:12.061182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.456 [2024-07-24 22:31:12.061348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.456 [2024-07-24 22:31:12.061377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.456 [2024-07-24 22:31:12.061394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.456 [2024-07-24 22:31:12.061418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:46.456 [2024-07-24 22:31:12.061453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.456 [2024-07-24 22:31:12.061471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.061494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.061523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.456 [2024-07-24 22:31:12.071261] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.456 [2024-07-24 22:31:12.071448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.456 [2024-07-24 22:31:12.071477] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.456 [2024-07-24 22:31:12.071503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.456 [2024-07-24 22:31:12.071529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.456 [2024-07-24 22:31:12.071577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.456 [2024-07-24 22:31:12.071596] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.071611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.071645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
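The `get_subsystem_paths` check being retried here pipes `rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0` through `jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs` to collapse the controller's path list into a single space-separated string (hence the `[[ 4420 4421 == \4\4\2\1 ]]` failure above while both ports are still present). A minimal reproduction of just the extraction pipeline, using hand-written stand-in JSON trimmed to the fields `jq` actually touches:

```shell
# Stand-in for bdev_nvme_get_controllers output; only the fields the
# jq filter reads are reproduced here.
json='[{"name":"nvme0","ctrlrs":[{"trid":{"trsvcid":"4421"}},{"trid":{"trsvcid":"4420"}}]}]'

# Same pipeline as host/discovery.sh get_subsystem_paths:
# one trsvcid per controller path, numerically sorted, joined by spaces.
paths=$(echo "$json" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
echo "$paths"
```

Once the 4420 path is torn down, the same pipeline yields just `4421`, the condition matches `$NVMF_SECOND_PORT`, and the wait loop returns 0.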
00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.456 [2024-07-24 22:31:12.081346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.456 [2024-07-24 22:31:12.081509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.456 [2024-07-24 22:31:12.081538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.456 [2024-07-24 22:31:12.081556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.456 [2024-07-24 22:31:12.081580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.456 [2024-07-24 22:31:12.081602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.456 [2024-07-24 22:31:12.081617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.081631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.081658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.456 [2024-07-24 22:31:12.091421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.456 [2024-07-24 22:31:12.091575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.456 [2024-07-24 22:31:12.091603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1585ee0 with addr=10.0.0.2, port=4420 00:21:46.456 [2024-07-24 22:31:12.091620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1585ee0 is same with the state(5) to be set 00:21:46.456 [2024-07-24 22:31:12.091644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1585ee0 (9): Bad file descriptor 00:21:46.456 [2024-07-24 22:31:12.091679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.456 [2024-07-24 22:31:12.091696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.456 [2024-07-24 22:31:12.091711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.456 [2024-07-24 22:31:12.091732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
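The `connect() failed, errno = 111` records repeated throughout this stretch are ECONNREFUSED on Linux: the target's 4420 listener has been removed, so every ~10 ms reset attempt against `10.0.0.2:4420` is refused until the discovery service (next records) drops the 4420 path and repoints the controller at 4421. A quick check of the errno value, assuming a Linux host with python3 available as in this CI run:

```shell
# errno 111 == ECONNREFUSED on Linux; each bdev_nvme reset retry above
# hits this until the discovery poller switches the path to port 4421.
python3 -c 'import errno; print(errno.ECONNREFUSED)'   # prints 111 on Linux
```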
00:21:46.456 [2024-07-24 22:31:12.094776] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:46.456 [2024-07-24 22:31:12.094810] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:46.456 22:31:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@916 -- # return 0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:47.831 22:31:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.831 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.832 
22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:21:47.832 22:31:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.832 22:31:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.765 [2024-07-24 22:31:14.408642] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.765 [2024-07-24 22:31:14.408695] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.765 [2024-07-24 22:31:14.408725] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.024 [2024-07-24 22:31:14.495964] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:49.024 [2024-07-24 22:31:14.563060] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.024 [2024-07-24 22:31:14.563120] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:21:49.024 request: 00:21:49.024 { 00:21:49.024 "name": "nvme", 00:21:49.024 "trtype": "tcp", 00:21:49.024 "traddr": "10.0.0.2", 00:21:49.024 "adrfam": "ipv4", 00:21:49.024 "trsvcid": "8009", 00:21:49.024 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:49.024 "wait_for_attach": true, 00:21:49.024 "method": "bdev_nvme_start_discovery", 00:21:49.024 "req_id": 1 00:21:49.024 } 00:21:49.024 Got JSON-RPC error response 00:21:49.024 response: 00:21:49.024 { 00:21:49.024 "code": -17, 00:21:49.024 "message": "File exists" 00:21:49.024 } 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.024 22:31:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.024 request: 00:21:49.024 { 00:21:49.024 "name": "nvme_second", 00:21:49.024 "trtype": "tcp", 00:21:49.024 "traddr": "10.0.0.2", 00:21:49.024 "adrfam": "ipv4", 00:21:49.024 "trsvcid": "8009", 00:21:49.024 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:49.024 "wait_for_attach": true, 00:21:49.024 "method": "bdev_nvme_start_discovery", 00:21:49.024 "req_id": 1 00:21:49.024 } 00:21:49.024 Got JSON-RPC error response 00:21:49.024 response: 00:21:49.024 { 00:21:49.024 "code": -17, 00:21:49.024 "message": "File exists" 00:21:49.024 } 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:49.024 
22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:49.024 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.282 22:31:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.282 22:31:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 [2024-07-24 22:31:15.782594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.214 [2024-07-24 22:31:15.782667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b8660 with addr=10.0.0.2, port=8010 00:21:50.214 [2024-07-24 22:31:15.782697] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:50.214 [2024-07-24 22:31:15.782714] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:50.214 [2024-07-24 22:31:15.782730] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:51.147 [2024-07-24 22:31:16.785021] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:51.147 [2024-07-24 22:31:16.785085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15b8660 with addr=10.0.0.2, port=8010 00:21:51.147 [2024-07-24 22:31:16.785116] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:51.147 [2024-07-24 22:31:16.785133] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:51.147 [2024-07-24 22:31:16.785148] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:52.523 [2024-07-24 22:31:17.787187] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:52.523 request: 00:21:52.523 { 00:21:52.523 "name": "nvme_second", 00:21:52.523 "trtype": "tcp", 00:21:52.523 "traddr": "10.0.0.2", 00:21:52.523 "adrfam": "ipv4", 00:21:52.523 "trsvcid": "8010", 00:21:52.523 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.523 "wait_for_attach": false, 00:21:52.523 "attach_timeout_ms": 3000, 00:21:52.523 "method": "bdev_nvme_start_discovery", 00:21:52.523 "req_id": 1 00:21:52.523 } 00:21:52.523 Got JSON-RPC error response 00:21:52.523 response: 00:21:52.523 { 00:21:52.523 "code": -110, 00:21:52.523 "message": "Connection timed out" 00:21:52.523 } 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 
-- # get_discovery_ctrlrs 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3899251 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.523 rmmod nvme_tcp 00:21:52.523 rmmod nvme_fabrics 00:21:52.523 rmmod nvme_keyring 00:21:52.523 22:31:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3899231 ']' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3899231 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3899231 ']' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3899231 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3899231 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3899231' 00:21:52.523 killing process with pid 3899231 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3899231 00:21:52.523 22:31:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3899231 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.523 22:31:18 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.523 22:31:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.059 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:55.059 00:21:55.060 real 0m13.809s 00:21:55.060 user 0m21.060s 00:21:55.060 sys 0m2.592s 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.060 ************************************ 00:21:55.060 END TEST nvmf_host_discovery 00:21:55.060 ************************************ 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.060 ************************************ 00:21:55.060 START TEST nvmf_host_multipath_status 00:21:55.060 ************************************ 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:55.060 * Looking for test storage... 00:21:55.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 
-- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.060 22:31:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.061 22:31:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.439 22:31:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.439 
22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:21:56.439 Found 0000:08:00.0 (0x8086 - 0x159b) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:21:56.439 Found 0000:08:00.1 (0x8086 - 0x159b) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.439 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:21:56.440 Found net devices under 0000:08:00.0: cvl_0_0 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.440 22:31:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:21:56.440 Found net devices under 0000:08:00.1: cvl_0_1 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.440 22:31:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:21:56.440 00:21:56.440 --- 10.0.0.2 ping statistics --- 00:21:56.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.440 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:21:56.440 00:21:56.440 --- 10.0.0.1 ping statistics --- 00:21:56.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.440 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.440 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:56.698 22:31:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3901732 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3901732 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3901732 ']' 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.698 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.698 [2024-07-24 22:31:22.225570] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:21:56.698 [2024-07-24 22:31:22.225672] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.698 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.698 [2024-07-24 22:31:22.291333] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:56.956 [2024-07-24 22:31:22.408254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.956 [2024-07-24 22:31:22.408307] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.956 [2024-07-24 22:31:22.408323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.956 [2024-07-24 22:31:22.408336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.956 [2024-07-24 22:31:22.408348] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.956 [2024-07-24 22:31:22.408427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.956 [2024-07-24 22:31:22.408432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3901732 00:21:56.956 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:57.214 [2024-07-24 22:31:22.807636] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.214 22:31:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:57.471 Malloc0 00:21:57.471 22:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:58.036 22:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.293 22:31:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:58.550 [2024-07-24 22:31:24.028858] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.550 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:58.809 [2024-07-24 22:31:24.305595] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3901955 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3901955 /var/tmp/bdevperf.sock 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3901955 ']' 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:58.809 22:31:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:58.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:58.809 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.106 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:59.106 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:21:59.107 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:59.389 22:31:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:59.953 Nvme0n1 00:21:59.953 22:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:00.211 Nvme0n1 00:22:00.211 22:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:00.211 22:31:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
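The two `bdev_nvme_attach_controller` calls above both name the same bdev (`-b Nvme0`): the first attaches port 4420, and the second adds `-x multipath`, which joins port 4421 to the existing controller as an additional path rather than creating a second device. A hedged dry-run of that pair (the `RPC` variable is an illustrative stand-in; prefixing it with `echo` makes the calls printable — point it at the real `scripts/rpc.py` to execute them):

```shell
# Dry-run sketch of multipath_status.sh@55-56: two attaches, one bdev.
RPC="echo rpc.py -s /var/tmp/bdevperf.sock"
NQN=nqn.2016-06.io.spdk:cnode1

# First path: plain attach on port 4420 creates bdev Nvme0n1.
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n $NQN -l -1 -o 10
# Second path: same -b Nvme0, plus -x multipath, folds 4421 into Nvme0.
$RPC bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
    -f ipv4 -n $NQN -x multipath -l -1 -o 10
```

Without `-x multipath` the second call would be rejected for reusing the controller name instead of registering an alternate path.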
00:22:02.734 22:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:02.734 22:31:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:02.734 22:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:02.991 22:31:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:03.922 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:03.922 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:03.922 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:03.922 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.180 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.180 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.180 22:31:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.180 22:31:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.437 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.437 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.437 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.438 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:05.002 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.002 
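Every `port_status` probe above is the same pattern: dump `bdev_nvme_get_io_paths` over the bdevperf RPC socket, then pick one flag (`current`, `connected`, or `accessible`) for one port with `jq`. The filter can be exercised standalone; the JSON below is a hypothetical, trimmed reconstruction of the RPC output shape implied by the filter, not captured from this run:

```shell
# Sample shaped like bdev_nvme_get_io_paths output (trimmed, illustrative).
sample='{
  "poll_groups": [
    {
      "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": true }
      ]
    }
  ]
}'

# The exact filter used by port_status for "is 4420 the current path?":
echo "$sample" | jq -r \
    '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current'
# → true
```

The test then string-compares that output against the expected literal (`[[ true == \t\r\u\e ]]` in the trace), which is why each check is three lines of log: the RPC call, the jq filter, and the comparison.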
22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.260 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.260 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:05.260 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.260 22:31:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.517 22:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.517 22:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:05.517 22:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:05.775 22:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.033 22:31:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.404 22:31:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.662 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.662 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.662 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.662 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:07.919 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.919 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:07.919 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.919 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.484 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.484 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.484 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.484 22:31:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.741 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.741 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.741 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.741 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.998 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.998 22:31:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:08.998 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.255 22:31:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:09.512 22:31:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:10.442 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:10.442 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:10.442 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.442 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:11.007 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.007 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:11.007 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.007 22:31:36 
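Each `set_ANA_state A B` step in the trace expands to exactly two `nvmf_subsystem_listener_set_ana_state` calls, one per listener: the first argument is applied to port 4420 and the second to port 4421. A hedged reconstruction of that helper as a dry run (with `RPC` set to `echo rpc.py` so the generated commands are printed rather than executed):

```shell
# Dry-run sketch of the set_ANA_state helper (multipath_status.sh@59-60).
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

set_ANA_state() {
    # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener.
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $RPC nvmf_subsystem_listener_set_ana_state $NQN -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

set_ANA_state non_optimized non_optimized
```

The test then sleeps one second before `check_status`, giving the initiator time to process the ANA change notification and re-select its current path.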
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:11.264 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:11.264 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:11.264 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.264 22:31:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.522 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.522 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.522 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.522 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.780 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.780 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.780 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.780 
22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:12.037 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.037 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:12.038 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:12.038 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:12.295 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:12.295 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:12.295 22:31:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:12.553 22:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:13.118 22:31:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:14.050 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:14.050 22:31:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:14.051 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.051 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:14.308 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.308 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:14.308 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.308 22:31:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:14.565 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.565 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:14.565 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.565 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.823 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.823 22:31:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.823 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.823 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:15.081 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.081 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:15.081 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.081 22:31:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:15.646 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.646 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:15.646 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.646 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:15.904 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.904 
22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:15.904 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:16.162 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:16.419 22:31:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:17.351 22:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:17.351 22:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.351 22:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.351 22:31:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:17.608 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.608 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:17.608 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.608 22:31:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:17.867 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.867 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:17.867 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.867 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.432 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.432 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.432 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.432 22:31:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.689 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.689 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:18.690 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.690 
22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.947 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.947 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.947 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.947 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:19.204 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.204 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:19.204 22:31:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:19.461 22:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:19.718 22:31:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:20.651 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:20.651 22:31:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.651 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.651 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:21.217 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:21.217 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:21.217 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.217 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:21.474 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.474 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:21.474 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.474 22:31:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:21.733 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.733 22:31:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:21.733 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.733 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.990 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.990 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.990 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.990 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:22.286 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:22.286 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:22.286 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.286 22:31:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:22.612 22:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:22.612 
22:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:22.869 22:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:22.869 22:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:23.126 22:31:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:23.384 22:31:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:24.759 
22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.759 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.017 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.017 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.017 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.017 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:25.276 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.276 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:25.276 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.276 22:31:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:25.842 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.842 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:25.842 
22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.842 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.100 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.100 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.100 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.101 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.359 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.359 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:26.359 22:31:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:26.617 22:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:26.875 22:31:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:22:27.813 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:27.813 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:27.813 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.813 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.073 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:28.073 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:28.073 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.073 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.330 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.330 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.330 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.330 22:31:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:28.588 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.588 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.588 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.588 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.155 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.155 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.155 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.155 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:29.413 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.413 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:29.413 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.413 22:31:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:29.670 22:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.670 22:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:29.670 22:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:29.928 22:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:30.186 22:31:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:31.121 22:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:31.121 22:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.121 22:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.121 22:31:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:31.379 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:31.638 22:31:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.638 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:31.897 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.897 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:31.897 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.897 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.156 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.156 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.156 22:31:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.156 22:31:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:32.414 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.414 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:32.414 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.414 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:32.672 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.672 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:32.672 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:32.930 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:33.188 22:31:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
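Each `set_ANA_state A B` step traced above issues two `nvmf_subsystem_listener_set_ana_state` RPCs, one per listener port (4420 and 4421), then the script sleeps one second so the host-side multipath state can catch up. A minimal dry-run sketch of how those command lines are assembled, using the rpc.py invocation, NQN, and address seen in this log (the helper name and defaults here are illustrative, not part of SPDK):

```python
def set_ana_state_cmds(states,
                       rpc="scripts/rpc.py",
                       nqn="nqn.2016-06.io.spdk:cnode1",
                       addr="10.0.0.2"):
    """Build the two RPC invocations the test script runs, one per port.
    Dry-run: the command vectors are returned, not executed."""
    return [
        [rpc, "nvmf_subsystem_listener_set_ana_state", nqn,
         "-t", "tcp", "-a", addr, "-s", port, "-n", state]
        for port, state in zip(("4420", "4421"), states)
    ]

# Mirrors the set_ANA_state non_optimized inaccessible step in the log.
cmds = set_ana_state_cmds(("non_optimized", "inaccessible"))
for c in cmds:
    print(" ".join(c))
```

After both RPCs return, the script's `sleep 1` gives the initiator time to observe the ANA change before `check_status` asserts the new path states.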
00:22:34.123 22:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:34.123 22:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:34.123 22:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.123 22:31:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:34.689 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:34.689 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:34.689 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.689 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:34.947 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.947 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:34.947 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:34.947 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:35.206 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.206 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:35.206 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.206 22:32:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:35.463 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.463 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:35.463 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.463 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:35.721 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:35.721 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:35.721 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.721 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:35.978 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3901955 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3901955 ']' 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3901955 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3901955 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3901955' 00:22:35.979 killing process with pid 3901955 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3901955 00:22:35.979 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3901955 00:22:36.240 Connection closed with partial response: 00:22:36.240 00:22:36.240 00:22:36.240 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3901955 00:22:36.240 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
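The repeated `port_status` checks in this log all reduce to one jq filter over `bdev_nvme_get_io_paths` output: `.poll_groups[].io_paths[] | select(.transport.trsvcid=="<port>").current` (or `.connected` / `.accessible`), compared against the expected value. The same selection can be sketched in Python over a hypothetical sample payload (the field layout follows the jq filters above; the concrete values are invented for illustration):

```python
import json

# Hypothetical sample of bdev_nvme_get_io_paths output; shape inferred from
# the jq filters in this log, values made up.
SAMPLE = json.loads("""
{
  "poll_groups": [
    {
      "io_paths": [
        {"transport": {"trsvcid": "4420"},
         "current": true, "connected": true, "accessible": true},
        {"transport": {"trsvcid": "4421"},
         "current": false, "connected": true, "accessible": false}
      ]
    }
  ]
}
""")

def port_status(payload, trsvcid, field):
    """Python analogue of: jq -r '.poll_groups[].io_paths[] |
    select(.transport.trsvcid=="<port>").<field>'"""
    for group in payload["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no path on that port

print(port_status(SAMPLE, "4420", "current"))     # True
print(port_status(SAMPLE, "4421", "accessible"))  # False
```

The test script then string-compares the jq output against the expected `true`/`false`, which is the `[[ false == \f\a\l\s\e ]]` pattern repeated throughout the log.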
00:22:36.240 [2024-07-24 22:31:24.363071] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:22:36.240 [2024-07-24 22:31:24.363163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3901955 ] 00:22:36.240 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.240 [2024-07-24 22:31:24.417328] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.240 [2024-07-24 22:31:24.537447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.240 Running I/O for 90 seconds... 00:22:36.240 [2024-07-24 22:31:41.633631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:29224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.240 [2024-07-24 22:31:41.633692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:36.240 [2024-07-24 22:31:41.633779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.240 [2024-07-24 22:31:41.633809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:36.240 [2024-07-24 22:31:41.633848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:29240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.240 [2024-07-24 22:31:41.633875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:36.240 [2024-07-24 22:31:41.633914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:36.240 [2024-07-24 22:31:41.633941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:22:36.240 [2024-07-24 22:31:41.633979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:29256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.240 [2024-07-24 22:31:41.634006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:22:36.240 [2024-07-24 22:31:41.634046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:29264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.240 [2024-07-24 22:31:41.634073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0
[... identical nvme_qpair *NOTICE* command/completion pairs elided (timestamps 22:31:41.634110–22:31:41.643190): WRITE commands to nsid:1, lba 29272–30024, plus interleaved READ commands to nsid:1, lba 29080–29216, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:30048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:30056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:30072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:30080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:30088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:41.643826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:30096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:41.643854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.802857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:123712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.802927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:123728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:123744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:123760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:123776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:123792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:123808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:123824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.243 [2024-07-24 22:31:58.803451] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:123352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:36.243 [2024-07-24 22:31:58.803838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:123448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.243 [2024-07-24 22:31:58.803867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.803905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:123480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.803935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.803972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:123472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.804445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.804473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:123840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.808670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:123536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.808751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:123856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.808820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:123872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.808887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:123888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.808954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.808993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:123904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:123920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:123936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:123952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:123968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:123648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:36.244 [2024-07-24 22:31:58.809808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:123976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809876] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:123992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.809951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.809992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.810021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:36.244 [2024-07-24 22:31:58.810060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.244 [2024-07-24 22:31:58.810089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:36.245 [2024-07-24 22:31:58.810128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:124040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.245 [2024-07-24 22:31:58.810156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:36.245 [2024-07-24 22:31:58.810195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:36.245 [2024-07-24 22:31:58.810224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:36.245 [2024-07-24 22:31:58.810262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.245 [2024-07-24 22:31:58.810291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:22:36.245 [2024-07-24 22:31:58.810329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:124088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.245 [2024-07-24 22:31:58.810359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:22:36.245 [2024-07-24 22:31:58.810399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:36.245 [2024-07-24 22:31:58.810427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:22:36.245 Received shutdown signal, test time was about 35.596799 seconds
00:22:36.245
00:22:36.245 Latency(us)
00:22:36.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.245 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:36.245 Verification LBA range: start 0x0 length 0x4000
00:22:36.245 Nvme0n1 : 35.60 7024.58 27.44 0.00 0.00 18185.76 579.51 4026531.84
00:22:36.245 ===================================================================================================================
00:22:36.245 Total : 7024.58 27.44 0.00 0.00 18185.76 579.51 4026531.84
00:22:36.245 22:32:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:36.504 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT
SIGTERM EXIT 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.505 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:36.505 rmmod nvme_tcp 00:22:36.766 rmmod nvme_fabrics 00:22:36.766 rmmod nvme_keyring 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3901732 ']' 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3901732 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3901732 ']' 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3901732 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:36.766 22:32:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3901732 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3901732' 00:22:36.766 killing process with pid 3901732 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3901732 00:22:36.766 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3901732 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:37.028 22:32:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.932 22:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:38.932
00:22:38.932 real 0m44.295s
00:22:38.932 user 2m11.214s
00:22:38.932 sys 0m13.018s
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:22:38.932 ************************************
00:22:38.932 END TEST nvmf_host_multipath_status
00:22:38.932 ************************************
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:38.932 ************************************
00:22:38.932 START TEST nvmf_discovery_remove_ifc
00:22:38.932 ************************************
00:22:38.932 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:22:39.191 * Looking for test storage...
00:22:39.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.191 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.192 22:32:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:40.572 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.833 22:32:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:40.833 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.833 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:40.834 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:40.834 Found net devices under 0000:08:00.0: cvl_0_0 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:40.834 22:32:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:40.834 Found net devices under 0000:08:00.1: cvl_0_1 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.834 22:32:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:40.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:40.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:22:40.834 00:22:40.834 --- 10.0.0.2 ping statistics --- 00:22:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.834 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:22:40.834 00:22:40.834 --- 10.0.0.1 ping statistics --- 00:22:40.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.834 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3907111 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3907111 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3907111 ']' 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:40.834 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.834 [2024-07-24 22:32:06.474953] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:22:40.834 [2024-07-24 22:32:06.475058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.834 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.093 [2024-07-24 22:32:06.540761] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.093 [2024-07-24 22:32:06.659928] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.093 [2024-07-24 22:32:06.659998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.093 [2024-07-24 22:32:06.660013] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.093 [2024-07-24 22:32:06.660026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.093 [2024-07-24 22:32:06.660037] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.093 [2024-07-24 22:32:06.660077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.093 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.351 [2024-07-24 22:32:06.802205] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.351 [2024-07-24 22:32:06.810380] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:41.351 null0 00:22:41.351 [2024-07-24 22:32:06.842340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3907139 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3907139 /tmp/host.sock 
00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3907139 ']' 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:41.351 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.351 22:32:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.351 [2024-07-24 22:32:06.913170] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:22:41.351 [2024-07-24 22:32:06.913264] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3907139 ] 00:22:41.351 EAL: No free 2048 kB hugepages reported on node 1 00:22:41.351 [2024-07-24 22:32:06.974372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.610 [2024-07-24 22:32:07.091188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:41.610 22:32:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.990 [2024-07-24 22:32:08.280057] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:42.990 [2024-07-24 22:32:08.280096] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:42.990 [2024-07-24 22:32:08.280124] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:42.990 [2024-07-24 22:32:08.407549] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:42.990 [2024-07-24 22:32:08.511301] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:42.990 [2024-07-24 22:32:08.511380] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:42.990 [2024-07-24 22:32:08.511424] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:42.990 [2024-07-24 22:32:08.511451] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:42.990 [2024-07-24 22:32:08.511496] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.990 [2024-07-24 22:32:08.518551] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c8ada0 was disconnected and freed. delete nvme_qpair. 
00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.990 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.991 22:32:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:44.369 22:32:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:45.365 22:32:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.329 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.330 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.330 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:46.330 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.330 22:32:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.265 22:32:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.265 22:32:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:48.205 22:32:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:22:48.465 [2024-07-24 22:32:13.952373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:48.465 [2024-07-24 22:32:13.952459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.465 [2024-07-24 22:32:13.952489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.465 [2024-07-24 22:32:13.952512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.465 [2024-07-24 22:32:13.952527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.465 [2024-07-24 22:32:13.952543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.465 [2024-07-24 22:32:13.952557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.465 [2024-07-24 22:32:13.952573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.465 [2024-07-24 22:32:13.952588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.465 [2024-07-24 22:32:13.952604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.465 [2024-07-24 22:32:13.952618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.465 [2024-07-24 22:32:13.952633] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51620 is same with the state(5) to be set 00:22:48.465 [2024-07-24 22:32:13.962379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c51620 (9): Bad file descriptor 00:22:48.465 [2024-07-24 22:32:13.972430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.403 22:32:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.403 [2024-07-24 22:32:14.995509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:49.403 [2024-07-24 22:32:14.995575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c51620 with addr=10.0.0.2, port=4420 00:22:49.403 [2024-07-24 22:32:14.995597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c51620 is same with the state(5) to be set 00:22:49.403 [2024-07-24 22:32:14.995630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c51620 (9): Bad file descriptor 00:22:49.403 [2024-07-24 22:32:14.996074] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable 
to perform failover, already in progress. 00:22:49.403 [2024-07-24 22:32:14.996125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:49.403 [2024-07-24 22:32:14.996140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:49.403 [2024-07-24 22:32:14.996156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:49.403 [2024-07-24 22:32:14.996180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:49.403 [2024-07-24 22:32:14.996195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:49.403 22:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:49.403 22:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:49.403 22:32:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.340 [2024-07-24 22:32:15.998694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.340 [2024-07-24 22:32:15.998725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.340 [2024-07-24 22:32:15.998741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.340 [2024-07-24 22:32:15.998756] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:50.340 [2024-07-24 22:32:15.998778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:50.340 [2024-07-24 22:32:15.998818] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:50.340 [2024-07-24 22:32:15.998856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.340 [2024-07-24 22:32:15.998877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.340 [2024-07-24 22:32:15.998898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.340 [2024-07-24 22:32:15.998913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.340 [2024-07-24 22:32:15.998929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.340 [2024-07-24 22:32:15.998943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.340 [2024-07-24 22:32:15.998960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.340 [2024-07-24 22:32:15.998974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.340 [2024-07-24 22:32:15.998990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.340 [2024-07-24 22:32:15.999005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.340 [2024-07-24 22:32:15.999028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:50.340 [2024-07-24 22:32:15.999148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c50a80 (9): Bad file descriptor 00:22:50.340 [2024-07-24 22:32:16.000195] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:50.340 [2024-07-24 22:32:16.000219] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.340 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:50.600 22:32:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:50.600 22:32:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.537 22:32:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:51.537 22:32:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.476 [2024-07-24 22:32:18.015698] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:52.476 [2024-07-24 22:32:18.015726] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:52.477 [2024-07-24 22:32:18.015753] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:52.477 [2024-07-24 22:32:18.102014] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:52.736 22:32:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:52.736 [2024-07-24 22:32:18.328329] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:52.736 [2024-07-24 22:32:18.328392] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:52.736 [2024-07-24 22:32:18.328431] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:52.736 [2024-07-24 22:32:18.328457] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:52.736 [2024-07-24 22:32:18.328474] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:52.736 [2024-07-24 22:32:18.375131] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1c58250 was disconnected and freed. delete nvme_qpair. 
00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3907139 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3907139 ']' 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3907139 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3907139 
00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3907139' 00:22:53.677 killing process with pid 3907139 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3907139 00:22:53.677 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3907139 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.937 rmmod nvme_tcp 00:22:53.937 rmmod nvme_fabrics 00:22:53.937 rmmod nvme_keyring 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3907111 ']' 00:22:53.937 
22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3907111 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3907111 ']' 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3907111 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3907111 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3907111' 00:22:53.937 killing process with pid 3907111 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3907111 00:22:53.937 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3907111 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.196 22:32:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:56.732 00:22:56.732 real 0m17.235s 00:22:56.732 user 0m25.550s 00:22:56.732 sys 0m2.684s 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:56.732 ************************************ 00:22:56.732 END TEST nvmf_discovery_remove_ifc 00:22:56.732 ************************************ 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.732 ************************************ 00:22:56.732 START TEST nvmf_identify_kernel_target 00:22:56.732 ************************************ 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:56.732 * Looking for test storage... 
00:22:56.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.732 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:56.733 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:56.733 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:56.733 22:32:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:22:58.115 Found 0000:08:00.0 (0x8086 - 0x159b) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.115 22:32:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:22:58.115 Found 0000:08:00.1 (0x8086 - 0x159b) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.115 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.116 22:32:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:22:58.116 Found net devices under 0000:08:00.0: cvl_0_0 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:22:58.116 Found net devices under 0000:08:00.1: cvl_0_1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.116 
22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.116 
22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.116 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.116 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:22:58.116 00:22:58.116 --- 10.0.0.2 ping statistics --- 00:22:58.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.116 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.116 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.116 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:22:58.116 00:22:58.116 --- 10.0.0.1 ping statistics --- 00:22:58.116 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.116 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:58.116 22:32:23 
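The nvmf_tcp_init sequence traced above (namespace creation, address assignment, firewall rule, connectivity pings) can be collected into a standalone sketch. Every command below appears verbatim in the log; the interface names cvl_0_0/cvl_0_1 are the ice-driver port names from this particular run and would differ on other hardware. Requires root and real NICs, so this is a reconstruction of the logged steps, not a portable script:

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps logged above (nvmf/common.sh).
# cvl_0_0 becomes the target inside a netns at 10.0.0.2;
# cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

# Start from clean interfaces.
ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Move the target port into its own network namespace.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends of the link.
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

# Bring the interfaces (and the namespace loopback) up.
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Accept NVMe/TCP traffic on the discovery/IO port.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Initiator-side kernel support for the later nvme discover/connect.
modprobe nvme-tcp
```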
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:58.116 22:32:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:59.057 Waiting for block devices as requested 00:22:59.057 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:22:59.317 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:59.317 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:22:59.317 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:22:59.578 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:22:59.578 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:22:59.578 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:22:59.578 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:22:59.838 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:22:59.838 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:22:59.838 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:00.099 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:00.099 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:00.099 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:00.099 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:00.360 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:00.360 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:00.360 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:00.619 No valid GPT data, bailing 00:23:00.619 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:00.619 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:23:00.620 00:23:00.620 Discovery Log Number of Records 2, Generation counter 2 00:23:00.620 =====Discovery Log Entry 0====== 00:23:00.620 trtype: tcp 00:23:00.620 adrfam: ipv4 00:23:00.620 subtype: current discovery subsystem 00:23:00.620 treq: not specified, sq flow control disable supported 00:23:00.620 portid: 1 00:23:00.620 trsvcid: 4420 00:23:00.620 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:00.620 traddr: 10.0.0.1 00:23:00.620 eflags: none 00:23:00.620 sectype: none 00:23:00.620 =====Discovery Log Entry 1====== 00:23:00.620 trtype: tcp 00:23:00.620 adrfam: ipv4 00:23:00.620 subtype: nvme subsystem 00:23:00.620 treq: not specified, sq flow control disable supported 00:23:00.620 portid: 1 
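The configure_kernel_target steps echoed above stand up an in-kernel NVMe/TCP target through configfs before the `nvme discover` that follows. A hedged sketch: the subsystem NQN, backing device /dev/nvme0n1, address, and port are taken from this run, but the log shows only the echoed values, not their destination files, so the configfs attribute names below are the standard nvmet ones and are inferred rather than confirmed by the log:

```shell
#!/usr/bin/env bash
# Sketch of the kernel nvmet target setup logged above
# (nvmf/common.sh configure_kernel_target). Attribute file names
# are the standard nvmet configfs layout, assumed here.
set -euo pipefail

NVMET=/sys/kernel/config/nvmet
SUBNQN=nqn.2016-06.io.spdk:testnqn
SUBSYS=$NVMET/subsystems/$SUBNQN
NSDIR=$SUBSYS/namespaces/1
PORT=$NVMET/ports/1

modprobe nvmet nvmet-tcp

# configfs mkdirs instantiate the subsystem, namespace, and port.
mkdir "$SUBSYS"
mkdir "$NSDIR"
mkdir "$PORT"

echo "SPDK-$SUBNQN" > "$SUBSYS/attr_serial"          # serial number
echo 1              > "$SUBSYS/attr_allow_any_host"  # no host allowlist
echo /dev/nvme0n1   > "$NSDIR/device_path"           # backing block device
echo 1              > "$NSDIR/enable"

echo 10.0.0.1       > "$PORT/addr_traddr"
echo tcp            > "$PORT/addr_trtype"
echo 4420           > "$PORT/addr_trsvcid"
echo ipv4           > "$PORT/addr_adrfam"

# Linking the subsystem under the port exposes it, producing the
# two discovery log entries shown in the nvme discover output.
ln -s "$SUBSYS" "$PORT/subsystems/"
```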
00:23:00.620 trsvcid: 4420 00:23:00.620 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:00.620 traddr: 10.0.0.1 00:23:00.620 eflags: none 00:23:00.620 sectype: none 00:23:00.620 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:00.620 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:00.620 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.620 ===================================================== 00:23:00.620 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:00.620 ===================================================== 00:23:00.620 Controller Capabilities/Features 00:23:00.620 ================================ 00:23:00.620 Vendor ID: 0000 00:23:00.620 Subsystem Vendor ID: 0000 00:23:00.620 Serial Number: 1c8f4074769fbfa2b47a 00:23:00.620 Model Number: Linux 00:23:00.620 Firmware Version: 6.7.0-68 00:23:00.620 Recommended Arb Burst: 0 00:23:00.620 IEEE OUI Identifier: 00 00 00 00:23:00.620 Multi-path I/O 00:23:00.620 May have multiple subsystem ports: No 00:23:00.620 May have multiple controllers: No 00:23:00.620 Associated with SR-IOV VF: No 00:23:00.620 Max Data Transfer Size: Unlimited 00:23:00.620 Max Number of Namespaces: 0 00:23:00.620 Max Number of I/O Queues: 1024 00:23:00.620 NVMe Specification Version (VS): 1.3 00:23:00.620 NVMe Specification Version (Identify): 1.3 00:23:00.620 Maximum Queue Entries: 1024 00:23:00.620 Contiguous Queues Required: No 00:23:00.620 Arbitration Mechanisms Supported 00:23:00.620 Weighted Round Robin: Not Supported 00:23:00.620 Vendor Specific: Not Supported 00:23:00.620 Reset Timeout: 7500 ms 00:23:00.620 Doorbell Stride: 4 bytes 00:23:00.620 NVM Subsystem Reset: Not Supported 00:23:00.620 Command Sets Supported 00:23:00.620 NVM Command Set: Supported 00:23:00.620 Boot Partition: Not Supported 
00:23:00.620 Memory Page Size Minimum: 4096 bytes 00:23:00.620 Memory Page Size Maximum: 4096 bytes 00:23:00.620 Persistent Memory Region: Not Supported 00:23:00.620 Optional Asynchronous Events Supported 00:23:00.620 Namespace Attribute Notices: Not Supported 00:23:00.620 Firmware Activation Notices: Not Supported 00:23:00.620 ANA Change Notices: Not Supported 00:23:00.620 PLE Aggregate Log Change Notices: Not Supported 00:23:00.620 LBA Status Info Alert Notices: Not Supported 00:23:00.620 EGE Aggregate Log Change Notices: Not Supported 00:23:00.620 Normal NVM Subsystem Shutdown event: Not Supported 00:23:00.620 Zone Descriptor Change Notices: Not Supported 00:23:00.620 Discovery Log Change Notices: Supported 00:23:00.620 Controller Attributes 00:23:00.620 128-bit Host Identifier: Not Supported 00:23:00.620 Non-Operational Permissive Mode: Not Supported 00:23:00.620 NVM Sets: Not Supported 00:23:00.620 Read Recovery Levels: Not Supported 00:23:00.620 Endurance Groups: Not Supported 00:23:00.620 Predictable Latency Mode: Not Supported 00:23:00.620 Traffic Based Keep ALive: Not Supported 00:23:00.620 Namespace Granularity: Not Supported 00:23:00.620 SQ Associations: Not Supported 00:23:00.620 UUID List: Not Supported 00:23:00.620 Multi-Domain Subsystem: Not Supported 00:23:00.620 Fixed Capacity Management: Not Supported 00:23:00.620 Variable Capacity Management: Not Supported 00:23:00.620 Delete Endurance Group: Not Supported 00:23:00.620 Delete NVM Set: Not Supported 00:23:00.620 Extended LBA Formats Supported: Not Supported 00:23:00.620 Flexible Data Placement Supported: Not Supported 00:23:00.620 00:23:00.620 Controller Memory Buffer Support 00:23:00.620 ================================ 00:23:00.620 Supported: No 00:23:00.620 00:23:00.620 Persistent Memory Region Support 00:23:00.620 ================================ 00:23:00.620 Supported: No 00:23:00.620 00:23:00.620 Admin Command Set Attributes 00:23:00.620 ============================ 00:23:00.620 Security 
Send/Receive: Not Supported 00:23:00.620 Format NVM: Not Supported 00:23:00.620 Firmware Activate/Download: Not Supported 00:23:00.620 Namespace Management: Not Supported 00:23:00.620 Device Self-Test: Not Supported 00:23:00.620 Directives: Not Supported 00:23:00.620 NVMe-MI: Not Supported 00:23:00.620 Virtualization Management: Not Supported 00:23:00.620 Doorbell Buffer Config: Not Supported 00:23:00.620 Get LBA Status Capability: Not Supported 00:23:00.620 Command & Feature Lockdown Capability: Not Supported 00:23:00.620 Abort Command Limit: 1 00:23:00.620 Async Event Request Limit: 1 00:23:00.620 Number of Firmware Slots: N/A 00:23:00.620 Firmware Slot 1 Read-Only: N/A 00:23:00.620 Firmware Activation Without Reset: N/A 00:23:00.620 Multiple Update Detection Support: N/A 00:23:00.620 Firmware Update Granularity: No Information Provided 00:23:00.620 Per-Namespace SMART Log: No 00:23:00.620 Asymmetric Namespace Access Log Page: Not Supported 00:23:00.620 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:00.620 Command Effects Log Page: Not Supported 00:23:00.620 Get Log Page Extended Data: Supported 00:23:00.620 Telemetry Log Pages: Not Supported 00:23:00.620 Persistent Event Log Pages: Not Supported 00:23:00.620 Supported Log Pages Log Page: May Support 00:23:00.620 Commands Supported & Effects Log Page: Not Supported 00:23:00.620 Feature Identifiers & Effects Log Page:May Support 00:23:00.620 NVMe-MI Commands & Effects Log Page: May Support 00:23:00.620 Data Area 4 for Telemetry Log: Not Supported 00:23:00.620 Error Log Page Entries Supported: 1 00:23:00.620 Keep Alive: Not Supported 00:23:00.620 00:23:00.620 NVM Command Set Attributes 00:23:00.620 ========================== 00:23:00.620 Submission Queue Entry Size 00:23:00.620 Max: 1 00:23:00.620 Min: 1 00:23:00.620 Completion Queue Entry Size 00:23:00.620 Max: 1 00:23:00.620 Min: 1 00:23:00.620 Number of Namespaces: 0 00:23:00.620 Compare Command: Not Supported 00:23:00.620 Write Uncorrectable Command: 
Not Supported 00:23:00.620 Dataset Management Command: Not Supported 00:23:00.620 Write Zeroes Command: Not Supported 00:23:00.620 Set Features Save Field: Not Supported 00:23:00.620 Reservations: Not Supported 00:23:00.620 Timestamp: Not Supported 00:23:00.620 Copy: Not Supported 00:23:00.620 Volatile Write Cache: Not Present 00:23:00.620 Atomic Write Unit (Normal): 1 00:23:00.620 Atomic Write Unit (PFail): 1 00:23:00.620 Atomic Compare & Write Unit: 1 00:23:00.620 Fused Compare & Write: Not Supported 00:23:00.620 Scatter-Gather List 00:23:00.620 SGL Command Set: Supported 00:23:00.620 SGL Keyed: Not Supported 00:23:00.620 SGL Bit Bucket Descriptor: Not Supported 00:23:00.620 SGL Metadata Pointer: Not Supported 00:23:00.620 Oversized SGL: Not Supported 00:23:00.620 SGL Metadata Address: Not Supported 00:23:00.621 SGL Offset: Supported 00:23:00.621 Transport SGL Data Block: Not Supported 00:23:00.621 Replay Protected Memory Block: Not Supported 00:23:00.621 00:23:00.621 Firmware Slot Information 00:23:00.621 ========================= 00:23:00.621 Active slot: 0 00:23:00.621 00:23:00.621 00:23:00.621 Error Log 00:23:00.621 ========= 00:23:00.621 00:23:00.621 Active Namespaces 00:23:00.621 ================= 00:23:00.621 Discovery Log Page 00:23:00.621 ================== 00:23:00.621 Generation Counter: 2 00:23:00.621 Number of Records: 2 00:23:00.621 Record Format: 0 00:23:00.621 00:23:00.621 Discovery Log Entry 0 00:23:00.621 ---------------------- 00:23:00.621 Transport Type: 3 (TCP) 00:23:00.621 Address Family: 1 (IPv4) 00:23:00.621 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:00.621 Entry Flags: 00:23:00.621 Duplicate Returned Information: 0 00:23:00.621 Explicit Persistent Connection Support for Discovery: 0 00:23:00.621 Transport Requirements: 00:23:00.621 Secure Channel: Not Specified 00:23:00.621 Port ID: 1 (0x0001) 00:23:00.621 Controller ID: 65535 (0xffff) 00:23:00.621 Admin Max SQ Size: 32 00:23:00.621 Transport Service Identifier: 4420 
00:23:00.621 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:00.621 Transport Address: 10.0.0.1 00:23:00.621 Discovery Log Entry 1 00:23:00.621 ---------------------- 00:23:00.621 Transport Type: 3 (TCP) 00:23:00.621 Address Family: 1 (IPv4) 00:23:00.621 Subsystem Type: 2 (NVM Subsystem) 00:23:00.621 Entry Flags: 00:23:00.621 Duplicate Returned Information: 0 00:23:00.621 Explicit Persistent Connection Support for Discovery: 0 00:23:00.621 Transport Requirements: 00:23:00.621 Secure Channel: Not Specified 00:23:00.621 Port ID: 1 (0x0001) 00:23:00.621 Controller ID: 65535 (0xffff) 00:23:00.621 Admin Max SQ Size: 32 00:23:00.621 Transport Service Identifier: 4420 00:23:00.621 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:00.621 Transport Address: 10.0.0.1 00:23:00.621 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:00.882 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.882 get_feature(0x01) failed 00:23:00.882 get_feature(0x02) failed 00:23:00.882 get_feature(0x04) failed 00:23:00.882 ===================================================== 00:23:00.882 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:00.882 ===================================================== 00:23:00.882 Controller Capabilities/Features 00:23:00.882 ================================ 00:23:00.882 Vendor ID: 0000 00:23:00.882 Subsystem Vendor ID: 0000 00:23:00.882 Serial Number: 137233b6b629f7eac2b0 00:23:00.882 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:00.882 Firmware Version: 6.7.0-68 00:23:00.882 Recommended Arb Burst: 6 00:23:00.882 IEEE OUI Identifier: 00 00 00 00:23:00.882 Multi-path I/O 00:23:00.882 May have multiple subsystem ports: Yes 00:23:00.882 May have multiple 
controllers: Yes 00:23:00.882 Associated with SR-IOV VF: No 00:23:00.882 Max Data Transfer Size: Unlimited 00:23:00.882 Max Number of Namespaces: 1024 00:23:00.882 Max Number of I/O Queues: 128 00:23:00.882 NVMe Specification Version (VS): 1.3 00:23:00.882 NVMe Specification Version (Identify): 1.3 00:23:00.882 Maximum Queue Entries: 1024 00:23:00.882 Contiguous Queues Required: No 00:23:00.882 Arbitration Mechanisms Supported 00:23:00.882 Weighted Round Robin: Not Supported 00:23:00.882 Vendor Specific: Not Supported 00:23:00.882 Reset Timeout: 7500 ms 00:23:00.882 Doorbell Stride: 4 bytes 00:23:00.882 NVM Subsystem Reset: Not Supported 00:23:00.882 Command Sets Supported 00:23:00.882 NVM Command Set: Supported 00:23:00.882 Boot Partition: Not Supported 00:23:00.882 Memory Page Size Minimum: 4096 bytes 00:23:00.882 Memory Page Size Maximum: 4096 bytes 00:23:00.882 Persistent Memory Region: Not Supported 00:23:00.882 Optional Asynchronous Events Supported 00:23:00.882 Namespace Attribute Notices: Supported 00:23:00.882 Firmware Activation Notices: Not Supported 00:23:00.882 ANA Change Notices: Supported 00:23:00.882 PLE Aggregate Log Change Notices: Not Supported 00:23:00.882 LBA Status Info Alert Notices: Not Supported 00:23:00.882 EGE Aggregate Log Change Notices: Not Supported 00:23:00.882 Normal NVM Subsystem Shutdown event: Not Supported 00:23:00.882 Zone Descriptor Change Notices: Not Supported 00:23:00.882 Discovery Log Change Notices: Not Supported 00:23:00.882 Controller Attributes 00:23:00.882 128-bit Host Identifier: Supported 00:23:00.882 Non-Operational Permissive Mode: Not Supported 00:23:00.882 NVM Sets: Not Supported 00:23:00.882 Read Recovery Levels: Not Supported 00:23:00.882 Endurance Groups: Not Supported 00:23:00.882 Predictable Latency Mode: Not Supported 00:23:00.882 Traffic Based Keep ALive: Supported 00:23:00.882 Namespace Granularity: Not Supported 00:23:00.882 SQ Associations: Not Supported 00:23:00.882 UUID List: Not Supported 
00:23:00.882 Multi-Domain Subsystem: Not Supported 00:23:00.882 Fixed Capacity Management: Not Supported 00:23:00.882 Variable Capacity Management: Not Supported 00:23:00.882 Delete Endurance Group: Not Supported 00:23:00.882 Delete NVM Set: Not Supported 00:23:00.882 Extended LBA Formats Supported: Not Supported 00:23:00.882 Flexible Data Placement Supported: Not Supported 00:23:00.882 00:23:00.882 Controller Memory Buffer Support 00:23:00.882 ================================ 00:23:00.882 Supported: No 00:23:00.882 00:23:00.882 Persistent Memory Region Support 00:23:00.882 ================================ 00:23:00.882 Supported: No 00:23:00.882 00:23:00.882 Admin Command Set Attributes 00:23:00.882 ============================ 00:23:00.882 Security Send/Receive: Not Supported 00:23:00.882 Format NVM: Not Supported 00:23:00.882 Firmware Activate/Download: Not Supported 00:23:00.882 Namespace Management: Not Supported 00:23:00.882 Device Self-Test: Not Supported 00:23:00.882 Directives: Not Supported 00:23:00.882 NVMe-MI: Not Supported 00:23:00.882 Virtualization Management: Not Supported 00:23:00.882 Doorbell Buffer Config: Not Supported 00:23:00.882 Get LBA Status Capability: Not Supported 00:23:00.882 Command & Feature Lockdown Capability: Not Supported 00:23:00.882 Abort Command Limit: 4 00:23:00.882 Async Event Request Limit: 4 00:23:00.882 Number of Firmware Slots: N/A 00:23:00.882 Firmware Slot 1 Read-Only: N/A 00:23:00.882 Firmware Activation Without Reset: N/A 00:23:00.882 Multiple Update Detection Support: N/A 00:23:00.882 Firmware Update Granularity: No Information Provided 00:23:00.882 Per-Namespace SMART Log: Yes 00:23:00.882 Asymmetric Namespace Access Log Page: Supported 00:23:00.882 ANA Transition Time : 10 sec 00:23:00.882 00:23:00.882 Asymmetric Namespace Access Capabilities 00:23:00.882 ANA Optimized State : Supported 00:23:00.882 ANA Non-Optimized State : Supported 00:23:00.882 ANA Inaccessible State : Supported 00:23:00.882 ANA Persistent Loss 
State : Supported 00:23:00.882 ANA Change State : Supported 00:23:00.882 ANAGRPID is not changed : No 00:23:00.882 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:00.882 00:23:00.882 ANA Group Identifier Maximum : 128 00:23:00.882 Number of ANA Group Identifiers : 128 00:23:00.882 Max Number of Allowed Namespaces : 1024 00:23:00.882 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:00.882 Command Effects Log Page: Supported 00:23:00.882 Get Log Page Extended Data: Supported 00:23:00.882 Telemetry Log Pages: Not Supported 00:23:00.882 Persistent Event Log Pages: Not Supported 00:23:00.882 Supported Log Pages Log Page: May Support 00:23:00.882 Commands Supported & Effects Log Page: Not Supported 00:23:00.882 Feature Identifiers & Effects Log Page:May Support 00:23:00.882 NVMe-MI Commands & Effects Log Page: May Support 00:23:00.882 Data Area 4 for Telemetry Log: Not Supported 00:23:00.882 Error Log Page Entries Supported: 128 00:23:00.882 Keep Alive: Supported 00:23:00.882 Keep Alive Granularity: 1000 ms 00:23:00.882 00:23:00.882 NVM Command Set Attributes 00:23:00.882 ========================== 00:23:00.882 Submission Queue Entry Size 00:23:00.882 Max: 64 00:23:00.882 Min: 64 00:23:00.882 Completion Queue Entry Size 00:23:00.882 Max: 16 00:23:00.882 Min: 16 00:23:00.882 Number of Namespaces: 1024 00:23:00.882 Compare Command: Not Supported 00:23:00.882 Write Uncorrectable Command: Not Supported 00:23:00.882 Dataset Management Command: Supported 00:23:00.882 Write Zeroes Command: Supported 00:23:00.882 Set Features Save Field: Not Supported 00:23:00.882 Reservations: Not Supported 00:23:00.882 Timestamp: Not Supported 00:23:00.882 Copy: Not Supported 00:23:00.882 Volatile Write Cache: Present 00:23:00.882 Atomic Write Unit (Normal): 1 00:23:00.882 Atomic Write Unit (PFail): 1 00:23:00.882 Atomic Compare & Write Unit: 1 00:23:00.882 Fused Compare & Write: Not Supported 00:23:00.882 Scatter-Gather List 00:23:00.882 SGL Command Set: Supported 00:23:00.882 SGL 
Keyed: Not Supported 00:23:00.882 SGL Bit Bucket Descriptor: Not Supported 00:23:00.883 SGL Metadata Pointer: Not Supported 00:23:00.883 Oversized SGL: Not Supported 00:23:00.883 SGL Metadata Address: Not Supported 00:23:00.883 SGL Offset: Supported 00:23:00.883 Transport SGL Data Block: Not Supported 00:23:00.883 Replay Protected Memory Block: Not Supported 00:23:00.883 00:23:00.883 Firmware Slot Information 00:23:00.883 ========================= 00:23:00.883 Active slot: 0 00:23:00.883 00:23:00.883 Asymmetric Namespace Access 00:23:00.883 =========================== 00:23:00.883 Change Count : 0 00:23:00.883 Number of ANA Group Descriptors : 1 00:23:00.883 ANA Group Descriptor : 0 00:23:00.883 ANA Group ID : 1 00:23:00.883 Number of NSID Values : 1 00:23:00.883 Change Count : 0 00:23:00.883 ANA State : 1 00:23:00.883 Namespace Identifier : 1 00:23:00.883 00:23:00.883 Commands Supported and Effects 00:23:00.883 ============================== 00:23:00.883 Admin Commands 00:23:00.883 -------------- 00:23:00.883 Get Log Page (02h): Supported 00:23:00.883 Identify (06h): Supported 00:23:00.883 Abort (08h): Supported 00:23:00.883 Set Features (09h): Supported 00:23:00.883 Get Features (0Ah): Supported 00:23:00.883 Asynchronous Event Request (0Ch): Supported 00:23:00.883 Keep Alive (18h): Supported 00:23:00.883 I/O Commands 00:23:00.883 ------------ 00:23:00.883 Flush (00h): Supported 00:23:00.883 Write (01h): Supported LBA-Change 00:23:00.883 Read (02h): Supported 00:23:00.883 Write Zeroes (08h): Supported LBA-Change 00:23:00.883 Dataset Management (09h): Supported 00:23:00.883 00:23:00.883 Error Log 00:23:00.883 ========= 00:23:00.883 Entry: 0 00:23:00.883 Error Count: 0x3 00:23:00.883 Submission Queue Id: 0x0 00:23:00.883 Command Id: 0x5 00:23:00.883 Phase Bit: 0 00:23:00.883 Status Code: 0x2 00:23:00.883 Status Code Type: 0x0 00:23:00.883 Do Not Retry: 1 00:23:00.883 Error Location: 0x28 00:23:00.883 LBA: 0x0 00:23:00.883 Namespace: 0x0 00:23:00.883 Vendor Log Page: 
0x0 00:23:00.883 ----------- 00:23:00.883 Entry: 1 00:23:00.883 Error Count: 0x2 00:23:00.883 Submission Queue Id: 0x0 00:23:00.883 Command Id: 0x5 00:23:00.883 Phase Bit: 0 00:23:00.883 Status Code: 0x2 00:23:00.883 Status Code Type: 0x0 00:23:00.883 Do Not Retry: 1 00:23:00.883 Error Location: 0x28 00:23:00.883 LBA: 0x0 00:23:00.883 Namespace: 0x0 00:23:00.883 Vendor Log Page: 0x0 00:23:00.883 ----------- 00:23:00.883 Entry: 2 00:23:00.883 Error Count: 0x1 00:23:00.883 Submission Queue Id: 0x0 00:23:00.883 Command Id: 0x4 00:23:00.883 Phase Bit: 0 00:23:00.883 Status Code: 0x2 00:23:00.883 Status Code Type: 0x0 00:23:00.883 Do Not Retry: 1 00:23:00.883 Error Location: 0x28 00:23:00.883 LBA: 0x0 00:23:00.883 Namespace: 0x0 00:23:00.883 Vendor Log Page: 0x0 00:23:00.883 00:23:00.883 Number of Queues 00:23:00.883 ================ 00:23:00.883 Number of I/O Submission Queues: 128 00:23:00.883 Number of I/O Completion Queues: 128 00:23:00.883 00:23:00.883 ZNS Specific Controller Data 00:23:00.883 ============================ 00:23:00.883 Zone Append Size Limit: 0 00:23:00.883 00:23:00.883 00:23:00.883 Active Namespaces 00:23:00.883 ================= 00:23:00.883 get_feature(0x05) failed 00:23:00.883 Namespace ID:1 00:23:00.883 Command Set Identifier: NVM (00h) 00:23:00.883 Deallocate: Supported 00:23:00.883 Deallocated/Unwritten Error: Not Supported 00:23:00.883 Deallocated Read Value: Unknown 00:23:00.883 Deallocate in Write Zeroes: Not Supported 00:23:00.883 Deallocated Guard Field: 0xFFFF 00:23:00.883 Flush: Supported 00:23:00.883 Reservation: Not Supported 00:23:00.883 Namespace Sharing Capabilities: Multiple Controllers 00:23:00.883 Size (in LBAs): 1953525168 (931GiB) 00:23:00.883 Capacity (in LBAs): 1953525168 (931GiB) 00:23:00.883 Utilization (in LBAs): 1953525168 (931GiB) 00:23:00.883 UUID: 46cb7a66-c035-430a-96fc-da648f9c3177 00:23:00.883 Thin Provisioning: Not Supported 00:23:00.883 Per-NS Atomic Units: Yes 00:23:00.883 Atomic Boundary Size (Normal): 0 
00:23:00.883 Atomic Boundary Size (PFail): 0 00:23:00.883 Atomic Boundary Offset: 0 00:23:00.883 NGUID/EUI64 Never Reused: No 00:23:00.883 ANA group ID: 1 00:23:00.883 Namespace Write Protected: No 00:23:00.883 Number of LBA Formats: 1 00:23:00.883 Current LBA Format: LBA Format #00 00:23:00.883 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:00.883 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:00.883 rmmod nvme_tcp 00:23:00.883 rmmod nvme_fabrics 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:00.883 
22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.883 22:32:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:02.791 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:02.791 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:02.791 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:02.791 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:03.050 22:32:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:03.050 22:32:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:03.991 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:03.991 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:03.991 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:04.929 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:04.929 00:23:04.929 real 0m8.640s 00:23:04.929 user 0m1.763s 00:23:04.929 sys 0m2.973s 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.929 ************************************ 00:23:04.929 END TEST nvmf_identify_kernel_target 00:23:04.929 ************************************ 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.929 ************************************ 00:23:04.929 START TEST nvmf_auth_host 00:23:04.929 ************************************ 00:23:04.929 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:05.188 * Looking for test storage... 00:23:05.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.188 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.189 22:32:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.189 22:32:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:23:06.566 Found 0000:08:00.0 (0x8086 - 0x159b) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:23:06.566 Found 0000:08:00.1 (0x8086 - 0x159b) 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.566 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:23:06.567 Found net devices under 0000:08:00.0: cvl_0_0 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:23:06.567 Found net devices under 0000:08:00.1: cvl_0_1 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.567 22:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.567 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:23:06.826 00:23:06.826 --- 10.0.0.2 ping statistics --- 00:23:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.826 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:23:06.826 00:23:06.826 --- 10.0.0.1 ping statistics --- 00:23:06.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.826 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3912553 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3912553 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3912553 ']' 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.826 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9a1e97fefda7b54755baa89c2f940c2e 00:23:07.085 22:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Nn1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9a1e97fefda7b54755baa89c2f940c2e 0 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9a1e97fefda7b54755baa89c2f940c2e 0 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9a1e97fefda7b54755baa89c2f940c2e 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Nn1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Nn1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Nn1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:07.085 22:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2226f5e888293c3292e1da36602851ef5bac1ef6e3c407929ea16c4a98c1a632 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.PSe 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2226f5e888293c3292e1da36602851ef5bac1ef6e3c407929ea16c4a98c1a632 3 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2226f5e888293c3292e1da36602851ef5bac1ef6e3c407929ea16c4a98c1a632 3 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2226f5e888293c3292e1da36602851ef5bac1ef6e3c407929ea16c4a98c1a632 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:07.085 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.PSe 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.PSe 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.PSe 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7dc75e6e9abeb2ca61ff2c927a54aa98db7de914e681ed21 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fLD 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7dc75e6e9abeb2ca61ff2c927a54aa98db7de914e681ed21 0 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7dc75e6e9abeb2ca61ff2c927a54aa98db7de914e681ed21 0 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7dc75e6e9abeb2ca61ff2c927a54aa98db7de914e681ed21 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fLD 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fLD 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fLD 
00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=32ac81948b4896d394c99f86122bb4badceb5e532c940b60 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.sz5 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 32ac81948b4896d394c99f86122bb4badceb5e532c940b60 2 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 32ac81948b4896d394c99f86122bb4badceb5e532c940b60 2 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=32ac81948b4896d394c99f86122bb4badceb5e532c940b60 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.344 22:32:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.sz5 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.sz5 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.sz5 00:23:07.344 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=baf63f68b88dcf8ef70918807c331c97 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.yPt 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key baf63f68b88dcf8ef70918807c331c97 1 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 baf63f68b88dcf8ef70918807c331c97 1 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=baf63f68b88dcf8ef70918807c331c97 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.yPt 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.yPt 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.yPt 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:07.345 22:32:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=633a937661f79611329018ac7268abd7 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rVE 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 633a937661f79611329018ac7268abd7 1 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 633a937661f79611329018ac7268abd7 1 00:23:07.345 22:32:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=633a937661f79611329018ac7268abd7 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:07.345 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rVE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rVE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rVE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8cb2e902181a37601ede0e54f9e48a4debe6cfd72183263e 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.JIU 00:23:07.604 22:32:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8cb2e902181a37601ede0e54f9e48a4debe6cfd72183263e 2 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8cb2e902181a37601ede0e54f9e48a4debe6cfd72183263e 2 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8cb2e902181a37601ede0e54f9e48a4debe6cfd72183263e 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.JIU 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.JIU 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.JIU 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=f77b1255673d1c70877ee92cdb48d830 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.3uE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f77b1255673d1c70877ee92cdb48d830 0 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f77b1255673d1c70877ee92cdb48d830 0 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f77b1255673d1c70877ee92cdb48d830 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.3uE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.3uE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3uE 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7606959bddc8cb383bc5ba94c56fd095586b25c036e3887a3b0757ed37e305c1 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.d7a 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7606959bddc8cb383bc5ba94c56fd095586b25c036e3887a3b0757ed37e305c1 3 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7606959bddc8cb383bc5ba94c56fd095586b25c036e3887a3b0757ed37e305c1 3 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7606959bddc8cb383bc5ba94c56fd095586b25c036e3887a3b0757ed37e305c1 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.d7a 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.d7a 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.d7a 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3912553 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@829 -- # '[' -z 3912553 ']' 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.604 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Nn1 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.PSe ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.PSe 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fLD 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.sz5 ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.sz5 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.yPt 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.862 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rVE ]] 00:23:08.120 22:32:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rVE 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.JIU 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3uE ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3uE 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.d7a 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:08.120 22:32:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:09.055 Waiting for block devices as requested 00:23:09.055 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:23:09.055 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:09.055 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:09.315 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:09.315 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:09.315 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:09.315 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:09.574 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:09.574 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:23:09.574 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:23:09.574 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:23:09.833 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:23:09.833 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:23:09.833 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:23:09.833 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:23:10.091 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:23:10.091 0000:80:04.0 (8086 3c20): 
vfio-pci -> ioatdma 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:10.350 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:10.610 No valid GPT data, bailing 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:10.610 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:10.610 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:23:10.610 00:23:10.611 Discovery Log Number of Records 2, Generation counter 2 00:23:10.611 =====Discovery Log Entry 0====== 00:23:10.611 trtype: tcp 00:23:10.611 adrfam: ipv4 00:23:10.611 subtype: current discovery subsystem 00:23:10.611 treq: not specified, sq flow control disable supported 00:23:10.611 portid: 1 00:23:10.611 trsvcid: 4420 00:23:10.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:10.611 traddr: 10.0.0.1 00:23:10.611 eflags: none 00:23:10.611 sectype: none 00:23:10.611 =====Discovery Log Entry 1====== 00:23:10.611 trtype: tcp 00:23:10.611 adrfam: ipv4 00:23:10.611 subtype: nvme subsystem 00:23:10.611 treq: not specified, sq flow control 
disable supported 00:23:10.611 portid: 1 00:23:10.611 trsvcid: 4420 00:23:10.611 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:10.611 traddr: 10.0.0.1 00:23:10.611 eflags: none 00:23:10.611 sectype: none 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.611 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.611 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.871 nvme0n1 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:10.871 
22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.871 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.131 nvme0n1 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:11.131 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.132 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.132 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.390 nvme0n1 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.390 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.390 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:11.391 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.391 22:32:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.391 22:32:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.391 nvme0n1 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.391 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.649 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.649 nvme0n1 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.649 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.908 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.908 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.908 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.909 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 nvme0n1 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.909 
22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.909 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:11.909 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.169 nvme0n1 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.169 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.169 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:12.429 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.429 22:32:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.429 22:32:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.429 nvme0n1 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.429 22:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.429 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:12.687 22:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.687 nvme0n1 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.687 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe3072 3 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:12.945 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:12.946 22:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.946 nvme0n1 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.946 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.205 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.206 
22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.206 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.466 nvme0n1 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.466 22:32:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.466 22:32:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.725 nvme0n1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:13.725 22:32:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:13.725 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.291 nvme0n1 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.291 
22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.291 22:32:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.551 nvme0n1 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.551 22:32:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.551 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.552 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:14.552 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.552 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:14.852 nvme0n1 00:23:14.852 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.852 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.852 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:14.853 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.129 
22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.129 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 nvme0n1 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.388 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.389 22:32:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.389 22:32:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.957 nvme0n1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.957 22:32:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.957 22:32:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.957 22:32:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.893 nvme0n1 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.893 22:32:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.893 22:32:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:16.893 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.463 nvme0n1 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.463 22:32:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.463 22:32:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.463 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
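Each iteration in the log above follows the same pattern: the target key is installed via `nvmet_auth_set_key`, the host is restricted to a single digest/dhgroup pair with `bdev_nvme_set_options`, the controller is attached with the matching `--dhchap-key` (adding `--dhchap-ctrlr-key` only when a controller key exists, as `keyid=4` shows), then verified with `bdev_nvme_get_controllers` and detached. A minimal sketch of that host-side loop, with the rpc.py invocations echoed rather than executed (the `RPC_CMD` stand-in, key names, and address are illustrative, taken from the log; this is not the actual `host/auth.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the per-(dhgroup, keyid) DH-HMAC-CHAP loop seen in the log.
# RPC_CMD is a stand-in; the real test drives scripts/rpc.py against a live target.
RPC_CMD="echo rpc.py"

digest=sha256
keys=(key0 key1 key2 key3 key4)
ckeys=(ckey0 ckey1 ckey2 ckey3 "")   # key4 has no controller key in the log

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
  for keyid in "${!keys[@]}"; do
    # Pass --dhchap-ctrlr-key only when a controller key is configured
    ckey_arg=()
    [[ -n ${ckeys[keyid]} ]] && ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")

    $RPC_CMD bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    $RPC_CMD bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey_arg[@]}"
    $RPC_CMD bdev_nvme_get_controllers
    $RPC_CMD bdev_nvme_detach_controller nvme0
  done
done
```

The unidirectional case (key4 with an empty ckey) exercises host-only authentication, which is why the log's last `bdev_nvme_attach_controller` per group omits `--dhchap-ctrlr-key`.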
00:23:18.032 nvme0n1 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.032 
22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.032 22:32:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 nvme0n1 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:18.967 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.968 22:32:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:18.968 22:32:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.902 nvme0n1 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.902 22:32:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.902 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.162 22:32:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:20.162 22:32:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.101 nvme0n1 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.101 22:32:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.101 22:32:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.101 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.102 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.102 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.102 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.102 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.102 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.361 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.361 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:21.361 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:21.361 22:32:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.300 nvme0n1 00:23:22.300 22:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.300 22:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.300 22:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.300 22:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.300 22:32:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.560 22:32:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:22.560 22:32:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:23.496 nvme0n1 00:23:23.496 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.496 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.496 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.496 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.496 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.755 
22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.755 22:32:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.693 nvme0n1 00:23:24.693 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.693 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.693 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.693 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.693 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:24.952 22:32:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.952 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.953 nvme0n1 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.953 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:25.213 22:32:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.213 nvme0n1 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.213 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.214 
22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.214 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.474 22:32:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.474 nvme0n1 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.474 22:32:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.474 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:25.735 nvme0n1 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.735 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.736 
22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.736 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 nvme0n1 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.995 22:32:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.995 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 nvme0n1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.254 22:32:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.254 22:32:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.254 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:26.255 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.255 22:32:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.512 nvme0n1 00:23:26.512 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.512 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.512 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.512 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.513 22:32:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.513 22:32:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.513 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.771 nvme0n1 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.771 22:32:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.771 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.772 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:27.030 nvme0n1 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.030 
22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.030 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:27.031 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.031 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.289 nvme0n1 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.289 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.547 22:32:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.547 22:32:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.814 nvme0n1 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.814 22:32:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.814 22:32:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.814 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.815 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.815 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:27.815 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.075 nvme0n1 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.075 22:32:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:28.075 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.076 22:32:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.076 22:32:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.640 nvme0n1 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.640 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.641 22:32:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.641 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:28.898 nvme0n1 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.898 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.899 
22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.899 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.464 nvme0n1 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.464 22:32:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.464 22:32:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.029 nvme0n1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.029 22:32:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.029 22:32:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.029 22:32:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.594 nvme0n1 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.594 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.851 22:32:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.851 22:32:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.851 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 nvme0n1 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 22:32:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.417 22:32:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.417 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:31.982 nvme0n1 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.982 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.240 
22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.240 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.241 22:32:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.806 nvme0n1 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:32.806 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.807 22:32:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.807 22:32:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.181 nvme0n1 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.181 22:32:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.181 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.182 22:32:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.182 22:32:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.116 nvme0n1 00:23:35.116 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.116 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.116 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.116 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.116 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.373 22:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:35.373 22:33:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.373 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.374 22:33:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.306 nvme0n1 00:23:36.306 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.306 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:36.306 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:36.306 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.306 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:36.564 22:33:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:36.564 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.565 22:33:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:37.937 nvme0n1 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.937 
22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:37.937 22:33:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.871 nvme0n1 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:38.871 22:33:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:38.871 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.131 nvme0n1 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:39.131 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:39.132 22:33:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.132 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 nvme0n1 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.419 
22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.419 22:33:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.679 nvme0n1 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.679 22:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:39.679 nvme0n1 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.679 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.938 
22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 nvme0n1 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:39.938 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:39.939 22:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:39.939 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 nvme0n1 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.197 22:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.197 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.198 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.466 22:33:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.466 22:33:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.466 nvme0n1 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.466 22:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.466 22:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.466 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.724 nvme0n1 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.724 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:40.982 22:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.982 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:40.983 nvme0n1 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:40.983 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:41.241 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.242 
22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.242 nvme0n1 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.242 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.501 22:33:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.501 22:33:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 nvme0n1 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.759 22:33:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:41.759 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:41.760 22:33:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.760 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.019 nvme0n1 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.019 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.278 22:33:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.278 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.279 22:33:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.279 22:33:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.539 nvme0n1 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.539 22:33:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.539 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:42.798 nvme0n1 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.798 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:43.056 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.057 
22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.057 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 nvme0n1 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:43.315 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.316 22:33:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.316 22:33:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.883 nvme0n1 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.883 22:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.883 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.141 22:33:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.141 22:33:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.708 nvme0n1 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.708 22:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.708 22:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:44.708 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.274 nvme0n1 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.274 22:33:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.274 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.275 22:33:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:46.211 nvme0n1 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.211 
22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.211 22:33:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.781 nvme0n1 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OWExZTk3ZmVmZGE3YjU0NzU1YmFhODljMmY5NDBjMmV2oBIL: 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MjIyNmY1ZTg4ODI5M2MzMjkyZTFkYTM2NjAyODUxZWY1YmFjMWVmNmUzYzQwNzkyOWVhMTZjNGE5OGMxYTYzMnYqF4A=: 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.781 22:33:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.781 22:33:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.160 nvme0n1 00:23:48.160 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.160 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.160 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.160 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.161 22:33:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.161 22:33:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.161 22:33:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.101 nvme0n1 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.101 22:33:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmFmNjNmNjhiODhkY2Y4ZWY3MDkxODgwN2MzMzFjOTeOYqNP: 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjMzYTkzNzY2MWY3OTYxMTMyOTAxOGFjNzI2OGFiZDeRopdL: 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:49.101 22:33:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.101 22:33:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.479 nvme0n1 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.479 22:33:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OGNiMmU5MDIxODFhMzc2MDFlZGUwZTU0ZjllNDhhNGRlYmU2Y2ZkNzIxODMyNjNloyKljw==: 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Zjc3YjEyNTU2NzNkMWM3MDg3N2VlOTJjZGI0OGQ4MzC+iXbo: 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.479 22:33:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:51.418 nvme0n1 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:51.418 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NzYwNjk1OWJkZGM4Y2IzODNiYzViYTk0YzU2ZmQwOTU1ODZiMjVjMDM2ZTM4ODdhM2IwNzU3ZWQzN2UzMDVjMRz82Us=: 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:51.678 
22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.678 22:33:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.617 nvme0n1 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2RjNzVlNmU5YWJlYjJjYTYxZmYyYzkyN2E1NGFhOThkYjdkZTkxNGU2ODFlZDIxTp4lhw==: 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: ]] 00:23:52.617 
22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MzJhYzgxOTQ4YjQ4OTZkMzk0Yzk5Zjg2MTIyYmI0YmFkY2ViNWU1MzJjOTQwYjYwXuLstA==: 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.617 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.876 request: 00:23:52.876 { 00:23:52.876 "name": "nvme0", 00:23:52.876 "trtype": "tcp", 00:23:52.876 "traddr": "10.0.0.1", 00:23:52.876 "adrfam": "ipv4", 00:23:52.876 "trsvcid": "4420", 00:23:52.876 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.876 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.876 "prchk_reftag": false, 00:23:52.876 "prchk_guard": false, 00:23:52.876 "hdgst": false, 00:23:52.876 "ddgst": false, 00:23:52.876 "method": "bdev_nvme_attach_controller", 00:23:52.876 "req_id": 1 00:23:52.876 } 00:23:52.876 Got JSON-RPC error response 00:23:52.876 response: 00:23:52.876 { 00:23:52.876 "code": -5, 00:23:52.876 "message": "Input/output error" 00:23:52.876 } 00:23:52.876 22:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.876 22:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.876 request: 00:23:52.876 { 00:23:52.876 "name": "nvme0", 00:23:52.876 "trtype": "tcp", 00:23:52.876 "traddr": "10.0.0.1", 00:23:52.876 "adrfam": "ipv4", 00:23:52.876 
"trsvcid": "4420", 00:23:52.876 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:52.876 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:52.876 "prchk_reftag": false, 00:23:52.876 "prchk_guard": false, 00:23:52.876 "hdgst": false, 00:23:52.876 "ddgst": false, 00:23:52.876 "dhchap_key": "key2", 00:23:52.876 "method": "bdev_nvme_attach_controller", 00:23:52.876 "req_id": 1 00:23:52.876 } 00:23:52.876 Got JSON-RPC error response 00:23:52.876 response: 00:23:52.876 { 00:23:52.876 "code": -5, 00:23:52.876 "message": "Input/output error" 00:23:52.876 } 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.876 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.135 
22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:53.135 22:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.135 request: 00:23:53.135 { 00:23:53.135 "name": "nvme0", 00:23:53.135 "trtype": "tcp", 00:23:53.135 "traddr": "10.0.0.1", 00:23:53.135 "adrfam": "ipv4", 00:23:53.135 "trsvcid": "4420", 00:23:53.135 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:53.135 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:53.135 "prchk_reftag": false, 00:23:53.135 "prchk_guard": false, 00:23:53.135 "hdgst": false, 00:23:53.135 "ddgst": false, 00:23:53.135 "dhchap_key": "key1", 00:23:53.135 "dhchap_ctrlr_key": "ckey2", 00:23:53.135 "method": "bdev_nvme_attach_controller", 00:23:53.135 "req_id": 1 00:23:53.135 } 00:23:53.135 Got JSON-RPC error response 00:23:53.135 response: 00:23:53.135 { 00:23:53.135 "code": -5, 00:23:53.135 "message": "Input/output error" 00:23:53.135 } 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:53.135 22:33:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.135 rmmod nvme_tcp 00:23:53.135 rmmod nvme_fabrics 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3912553 ']' 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3912553 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3912553 ']' 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3912553 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3912553 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3912553' 00:23:53.135 killing process with pid 3912553 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3912553 00:23:53.135 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3912553 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.393 22:33:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:55.295 22:33:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:55.553 22:33:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:56.488 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:56.488 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:23:56.488 0000:80:04.0 (8086 3c20): 
ioatdma -> vfio-pci 00:23:57.422 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:23:57.422 22:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Nn1 /tmp/spdk.key-null.fLD /tmp/spdk.key-sha256.yPt /tmp/spdk.key-sha384.JIU /tmp/spdk.key-sha512.d7a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:57.422 22:33:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:58.360 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:58.360 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:58.360 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:58.360 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:58.360 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:58.360 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:58.360 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:58.360 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:58.360 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:58.360 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:23:58.360 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:23:58.360 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:23:58.360 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:23:58.360 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:23:58.360 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:23:58.360 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:23:58.360 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:23:58.619 00:23:58.619 real 0m53.472s 00:23:58.619 user 0m51.237s 00:23:58.619 sys 0m5.193s 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.619 22:33:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.619 ************************************ 00:23:58.619 END TEST nvmf_auth_host 00:23:58.619 ************************************ 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.619 ************************************ 00:23:58.619 START TEST nvmf_digest 00:23:58.619 ************************************ 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:58.619 * Looking for test storage... 
00:23:58.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.619 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.620 22:33:24 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:23:58.620 22:33:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.526 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:00.527 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
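The device-scan loop above buckets NICs by PCI vendor:device ID (here 0x8086:0x159b lands in the e810 bucket and is probed as the `ice` driver). A minimal stand-alone sketch of that classification, using only the IDs visible in this log (the `classify_nic` name is illustrative, not the helper's real name):

```shell
#!/usr/bin/env bash
# Classify a PCI vendor:device pair the way nvmf/common.sh buckets NICs.
# The IDs below are exactly those listed in this log's e810/x722/mlx tables.
classify_nic() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|0x15b3:0x1013) echo mlx ;;
        *)                           echo unknown ;;
    esac
}

# Both ports found in this run (0000:08:00.0/.1) report 0x8086:0x159b:
classify_nic 0x8086:0x159b   # -> e810
```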
00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:00.527 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.527 22:33:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:00.527 Found net devices under 0000:08:00.0: cvl_0_0 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:00.527 Found net devices under 0000:08:00.1: cvl_0_1 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:24:00.527 00:24:00.527 --- 10.0.0.2 ping statistics --- 00:24:00.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.527 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:24:00.527 00:24:00.527 --- 10.0.0.1 ping statistics --- 00:24:00.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.527 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
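The `nvmf_tcp_init` block above moves one port (cvl_0_0) into a fresh network namespace, addresses the pair as 10.0.0.2 (target side) and 10.0.0.1 (initiator side), opens TCP/4420 in iptables, and confirms reachability with a ping in each direction. A dry-run sketch of that sequence, using the names and addresses from this log; it only prints the commands, since actually applying them requires root:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace topology set up by nvmf_tcp_init in this log.
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
setup_cmds=(
    "ip netns add $NS"
    "ip link set $TGT_IF netns $NS"
    "ip addr add 10.0.0.1/24 dev $INI_IF"
    "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
    "ip link set $INI_IF up"
    "ip netns exec $NS ip link set $TGT_IF up"
    "ip netns exec $NS ip link set lo up"
    "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT"
    "ping -c 1 10.0.0.2"
    "ip netns exec $NS ping -c 1 10.0.0.1"
)
printf '%s\n' "${setup_cmds[@]}"   # swap printf for eval/sudo to apply
```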
00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:00.527 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:00.528 ************************************ 00:24:00.528 START TEST nvmf_digest_clean 00:24:00.528 ************************************ 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3921120 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3921120 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3921120 ']' 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.528 22:33:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.528 [2024-07-24 22:33:25.967988] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:00.528 [2024-07-24 22:33:25.968090] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.528 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.528 [2024-07-24 22:33:26.038263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.528 [2024-07-24 22:33:26.156720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.528 [2024-07-24 22:33:26.156786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.528 [2024-07-24 22:33:26.156801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.528 [2024-07-24 22:33:26.156820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.528 [2024-07-24 22:33:26.156832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.528 [2024-07-24 22:33:26.156871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.528 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.528 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:00.528 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.528 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.528 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.787 null0 00:24:00.787 [2024-07-24 22:33:26.347055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.787 [2024-07-24 22:33:26.371272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3921229 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3921229 /var/tmp/bperf.sock 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3921229 ']' 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:00.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
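Both the target and each bdevperf instance are gated on `waitforlisten`, which blocks until the freshly started process accepts RPCs on its UNIX-domain socket (here /var/tmp/spdk.sock and /var/tmp/bperf.sock), giving up after `max_retries` attempts. A rough stand-alone approximation of that wait loop; the real helper lives in autotest_common.sh and `wait_for_sock` is an illustrative name, not the actual function:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, roughly how waitforlisten gates
# nvmf_tgt/bdevperf startup in this log. Returns 1 on timeout.
wait_for_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -S $sock ]] && return 0   # socket file exists: process is up
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}

# e.g. wait_for_sock /var/tmp/bperf.sock 100 before issuing rpc.py calls
```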
00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.787 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:00.787 [2024-07-24 22:33:26.423360] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:00.787 [2024-07-24 22:33:26.423450] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921229 ] 00:24:00.787 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.787 [2024-07-24 22:33:26.484941] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.046 [2024-07-24 22:33:26.601523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.046 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.046 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:01.046 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.046 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.046 22:33:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.614 22:33:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.614 22:33:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.872 nvme0n1 00:24:01.872 22:33:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:01.872 22:33:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.130 Running I/O for 2 seconds... 00:24:04.035 00:24:04.035 Latency(us) 00:24:04.035 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.035 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:04.035 nvme0n1 : 2.00 16699.04 65.23 0.00 0.00 7655.54 4004.98 18350.08 00:24:04.035 =================================================================================================================== 00:24:04.035 Total : 16699.04 65.23 0.00 0.00 7655.54 4004.98 18350.08 00:24:04.035 0 00:24:04.035 22:33:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.035 22:33:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.035 22:33:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.035 22:33:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.035 | select(.opcode=="crc32c") 00:24:04.035 | "\(.module_name) \(.executed)"' 00:24:04.035 22:33:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3921229 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3921229 ']' 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3921229 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921229 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921229' 00:24:04.621 killing process with pid 3921229 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3921229 00:24:04.621 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.621 00:24:04.621 Latency(us) 00:24:04.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.621 =================================================================================================================== 00:24:04.621 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3921229 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3921543 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3921543 /var/tmp/bperf.sock 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3921543 ']' 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:04.621 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:04.621 [2024-07-24 22:33:30.304230] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:04.621 [2024-07-24 22:33:30.304326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921543 ] 00:24:04.621 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:04.621 Zero copy mechanism will not be used. 00:24:04.879 EAL: No free 2048 kB hugepages reported on node 1 00:24:04.879 [2024-07-24 22:33:30.364042] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.879 [2024-07-24 22:33:30.480531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.879 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:04.879 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:04.879 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:04.879 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:04.879 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:05.447 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.447 22:33:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.704 nvme0n1 00:24:05.704 22:33:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:05.704 22:33:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:05.963 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:05.963 Zero copy mechanism will not be used. 00:24:05.963 Running I/O for 2 seconds... 00:24:07.898 00:24:07.898 Latency(us) 00:24:07.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:07.898 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:07.898 nvme0n1 : 2.00 4425.93 553.24 0.00 0.00 3609.57 879.88 11213.94 00:24:07.898 =================================================================================================================== 00:24:07.898 Total : 4425.93 553.24 0.00 0.00 3609.57 879.88 11213.94 00:24:07.898 0 00:24:07.898 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:07.898 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:07.898 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:07.898 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:07.898 | select(.opcode=="crc32c") 
00:24:07.898 | "\(.module_name) \(.executed)"' 00:24:07.898 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3921543 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3921543 ']' 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3921543 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921543 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921543' 00:24:08.157 killing process with pid 3921543 00:24:08.157 22:33:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3921543 00:24:08.157 Received shutdown signal, test time was about 2.000000 seconds 00:24:08.157 00:24:08.157 Latency(us) 00:24:08.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.157 =================================================================================================================== 00:24:08.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.157 22:33:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3921543 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3921884 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3921884 /var/tmp/bperf.sock 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:08.416 22:33:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3921884 ']' 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.416 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.416 [2024-07-24 22:33:34.108080] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:08.416 [2024-07-24 22:33:34.108173] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921884 ] 00:24:08.674 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.674 [2024-07-24 22:33:34.168876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.674 [2024-07-24 22:33:34.285518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:08.674 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.674 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:08.674 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:08.674 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:08.674 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:09.239 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.239 22:33:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.497 nvme0n1 00:24:09.497 22:33:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:09.497 22:33:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.756 Running I/O for 2 seconds... 00:24:11.663 00:24:11.663 Latency(us) 00:24:11.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.663 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:11.663 nvme0n1 : 2.01 18536.17 72.41 0.00 0.00 6887.77 3106.89 10971.21 00:24:11.663 =================================================================================================================== 00:24:11.663 Total : 18536.17 72.41 0.00 0.00 6887.77 3106.89 10971.21 00:24:11.663 0 00:24:11.663 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:11.663 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:11.663 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:11.663 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:11.663 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:11.663 | select(.opcode=="crc32c") 00:24:11.663 | "\(.module_name) \(.executed)"' 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 3921884 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3921884 ']' 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3921884 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921884 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921884' 00:24:11.921 killing process with pid 3921884 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3921884 00:24:11.921 Received shutdown signal, test time was about 2.000000 seconds 00:24:11.921 00:24:11.921 Latency(us) 00:24:11.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.921 =================================================================================================================== 00:24:11.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.921 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3921884 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3922276 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3922276 /var/tmp/bperf.sock 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3922276 ']' 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:12.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.179 22:33:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.179 [2024-07-24 22:33:37.856621] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:12.179 [2024-07-24 22:33:37.856717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922276 ] 00:24:12.179 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:12.179 Zero copy mechanism will not be used. 00:24:12.437 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.437 [2024-07-24 22:33:37.915963] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.437 [2024-07-24 22:33:38.032636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.437 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.437 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:12.437 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:12.437 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:12.437 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:13.005 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.005 22:33:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:13.005 nvme0n1 00:24:13.266 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:13.266 22:33:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:13.266 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:13.266 Zero copy mechanism will not be used. 00:24:13.266 Running I/O for 2 seconds... 00:24:15.171 00:24:15.171 Latency(us) 00:24:15.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.171 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:15.171 nvme0n1 : 2.00 5706.86 713.36 0.00 0.00 2792.49 1893.26 6116.69 00:24:15.171 =================================================================================================================== 00:24:15.171 Total : 5706.86 713.36 0.00 0.00 2792.49 1893.26 6116.69 00:24:15.171 0 00:24:15.171 22:33:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:15.171 22:33:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:15.171 22:33:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:15.171 22:33:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:15.171 | select(.opcode=="crc32c") 00:24:15.171 | "\(.module_name) \(.executed)"' 00:24:15.171 22:33:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3922276 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3922276 ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3922276 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922276 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922276' 00:24:15.737 killing process with pid 3922276 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3922276 00:24:15.737 Received shutdown signal, test time was about 2.000000 seconds 
00:24:15.737 00:24:15.737 Latency(us) 00:24:15.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.737 =================================================================================================================== 00:24:15.737 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3922276 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3921120 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3921120 ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3921120 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921120 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921120' 00:24:15.737 killing process with pid 3921120 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3921120 00:24:15.737 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3921120 00:24:15.996 00:24:15.996 real 0m15.711s 00:24:15.996 user 0m31.042s 00:24:15.996 sys 0m4.224s 
00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:15.996 ************************************ 00:24:15.996 END TEST nvmf_digest_clean 00:24:15.996 ************************************ 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.996 ************************************ 00:24:15.996 START TEST nvmf_digest_error 00:24:15.996 ************************************ 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3922616 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
nvmf/common.sh@482 -- # waitforlisten 3922616 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3922616 ']' 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.996 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.254 [2024-07-24 22:33:41.740044] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:16.254 [2024-07-24 22:33:41.740130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:16.254 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.254 [2024-07-24 22:33:41.806412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.254 [2024-07-24 22:33:41.924201] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:16.254 [2024-07-24 22:33:41.924270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:16.254 [2024-07-24 22:33:41.924287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:16.254 [2024-07-24 22:33:41.924300] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:16.254 [2024-07-24 22:33:41.924322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:16.254 [2024-07-24 22:33:41.924361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.513 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.513 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:16.513 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.513 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.513 22:33:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.513 [2024-07-24 22:33:42.013020] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.513 22:33:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.513 null0 00:24:16.513 [2024-07-24 22:33:42.121156] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.513 [2024-07-24 22:33:42.145388] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3922724 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3922724 /var/tmp/bperf.sock 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3922724 ']' 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.513 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:16.513 [2024-07-24 22:33:42.198431] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:16.513 [2024-07-24 22:33:42.198534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3922724 ] 00:24:16.773 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.773 [2024-07-24 22:33:42.261036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.773 [2024-07-24 22:33:42.377814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.031 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.031 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:17.031 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.031 22:33:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.289 22:33:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.547 nvme0n1 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:17.548 22:33:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:17.818 Running I/O for 2 seconds... 00:24:17.818 [2024-07-24 22:33:43.350001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.350073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.350094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.363190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.363226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.363246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.380683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.380732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.380753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.393360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.393401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.393421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.409601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.409634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.409653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.423738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.423771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.423789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.437023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.437057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.437076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.451109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.451141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.451160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.466866] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.466899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.466917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.481503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.481536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.481555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.494620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.494653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.494671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:17.818 [2024-07-24 22:33:43.508698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdb02c0) 00:24:17.818 [2024-07-24 22:33:43.508731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.818 [2024-07-24 22:33:43.508749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.523641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.523676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:25271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.523695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.539340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.539372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.539391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.552768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.552800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.552819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.569927] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.569960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.569979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.583554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.583586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.583605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.600118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.600151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.600169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.612901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.612941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.612959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.630706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.630740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.630766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.646558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.646591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.646610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.663221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.663262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.663286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.676137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.080 [2024-07-24 22:33:43.676170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.080 [2024-07-24 22:33:43.676188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.080 [2024-07-24 22:33:43.691742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.691775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9867 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.691793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.706963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.706996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.707015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.719817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.719849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.719868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.733942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.733976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.733994] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.749045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.749081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.749100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.763455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.763511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.763531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.081 [2024-07-24 22:33:43.776370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.081 [2024-07-24 22:33:43.776401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:4701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.081 [2024-07-24 22:33:43.776420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.793859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.793894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.793917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.810284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.810319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.810341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.823689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.823722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.823740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.837055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.837089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.837108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.853813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.853846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:4012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.853869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.869134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.869178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.869198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.882723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.882764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:19396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.882782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.899916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.899949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.899968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.918400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.918433] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:24491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.918451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.930478] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.930521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.930545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.948291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.948325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.948344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.965959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.965994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.966013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.982749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.982782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.982801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:43.995932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:43.995966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:43.995985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:44.014706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:44.014739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:44.014758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.339 [2024-07-24 22:33:44.033301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.339 [2024-07-24 22:33:44.033343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.339 [2024-07-24 22:33:44.033370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.050963] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.050997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.051016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.064749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.064782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.064801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.080134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.080166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.080185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.094894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.094926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.094945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.107921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.107952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:8790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.107971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.123475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.123521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.123540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.136076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.136108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.136127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.151235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.151267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.151286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.168884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.168925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.168944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.182646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.182679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.182698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.201113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.201154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.201172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.218206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.218238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.218261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.231273] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.231306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.231325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.247823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.247855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.247873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.263543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.263575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.263595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.275951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.275983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:18.599 [2024-07-24 22:33:44.276002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.599 [2024-07-24 22:33:44.291439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.599 [2024-07-24 22:33:44.291472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.599 [2024-07-24 22:33:44.291506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.309124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.309159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.309178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.321964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.321997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.322016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.338892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.338930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:15763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.338949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.352458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.352503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.352524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.369738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.369776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.369795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.385795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.385829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.385848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.399159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.399194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.399213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.416027] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.416062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.416081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.431325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.431373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:14382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.431392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.445958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.445992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.446011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.459139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 
00:24:18.860 [2024-07-24 22:33:44.459172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.459191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.473435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.473468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.473493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.487770] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.487803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:4632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.487822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.501986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.502018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.502037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.516842] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.516875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.516893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.531073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.531106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.531125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.545324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.545358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.545376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.860 [2024-07-24 22:33:44.560039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:18.860 [2024-07-24 22:33:44.560073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.860 [2024-07-24 22:33:44.560096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.574660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.574694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.574713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.588728] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.588765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.588784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.605324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.605358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.605382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.622221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.622254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.622273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.635267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.635301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.635320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.652000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.652034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.652053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.667981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.668018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.668037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.119 [2024-07-24 22:33:44.680412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.119 [2024-07-24 22:33:44.680445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.119 [2024-07-24 22:33:44.680477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.696123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.696156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.696175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.712201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.712236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.712254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.724905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.724937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.724956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.744664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.120 [2024-07-24 22:33:44.744725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.759178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.759216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.759238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.771109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.771142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.771161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.789507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.789541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.789559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.805660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.805693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 
lba:21754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.805712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.120 [2024-07-24 22:33:44.818966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.120 [2024-07-24 22:33:44.819009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.120 [2024-07-24 22:33:44.819028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-24 22:33:44.837889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.379 [2024-07-24 22:33:44.837924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.379 [2024-07-24 22:33:44.837942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.379 [2024-07-24 22:33:44.850406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.850438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.850456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.868717] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.868751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.868769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.885656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.885689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.885708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.898722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.898755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.898773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.912307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.912341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.912359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.929039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 
00:24:19.380 [2024-07-24 22:33:44.929072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.929091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.946217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.946251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.946271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.959919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.959952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.959970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.973493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.973525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.973543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:44.987752] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:44.987784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:44.987803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.002336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.002370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.002388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.017815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.017848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.017866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.030556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.030598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.030616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.044782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.044815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.044833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.059788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.059823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:13728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.059841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.380 [2024-07-24 22:33:45.076231] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.380 [2024-07-24 22:33:45.076266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.380 [2024-07-24 22:33:45.076295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.089165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.639 [2024-07-24 22:33:45.089199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.639 [2024-07-24 22:33:45.089226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.106089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.639 [2024-07-24 22:33:45.106123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:22163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.639 [2024-07-24 22:33:45.106149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.118973] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.639 [2024-07-24 22:33:45.119006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.639 [2024-07-24 22:33:45.119024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.133921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.639 [2024-07-24 22:33:45.133955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.639 [2024-07-24 22:33:45.133973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.151164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.639 [2024-07-24 22:33:45.151204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.639 [2024-07-24 22:33:45.151223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.639 [2024-07-24 22:33:45.166181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.166217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.166235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.179453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.179498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.179518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.195811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.195843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.195863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.210529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.210561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.640 [2024-07-24 22:33:45.210580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.224146] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.224182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:14833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.224201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.238988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.239021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.239040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.252036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.252069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.252087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.268704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.268738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:5869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.268756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.287377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.287414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.287433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.303768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.303799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:6410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.303818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.316393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.316425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.640 [2024-07-24 22:33:45.316444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.640 [2024-07-24 22:33:45.331098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xdb02c0) 00:24:19.640 [2024-07-24 22:33:45.331129] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:19.640 [2024-07-24 22:33:45.331154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:19.640
00:24:19.640 Latency(us)
00:24:19.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:19.640 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:19.640 nvme0n1 : 2.01 16793.87 65.60 0.00 0.00 7611.38 3932.16 23495.87
00:24:19.640 ===================================================================================================================
00:24:19.640 Total : 16793.87 65.60 0.00 0.00 7611.38 3932.16 23495.87
00:24:19.640 0
00:24:19.898 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:19.898 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:19.898 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:19.898 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:19.898 | .driver_specific
00:24:19.898 | .nvme_error
00:24:19.898 | .status_code
00:24:19.898 | .command_transient_transport_error'
00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3922724
00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3922724 ']'
00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 --
# kill -0 3922724 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922724 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922724' 00:24:20.157 killing process with pid 3922724 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3922724 00:24:20.157 Received shutdown signal, test time was about 2.000000 seconds 00:24:20.157 00:24:20.157 Latency(us) 00:24:20.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.157 =================================================================================================================== 00:24:20.157 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.157 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3922724 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:20.415 22:33:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3923040 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3923040 /var/tmp/bperf.sock 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3923040 ']' 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.415 22:33:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.415 [2024-07-24 22:33:45.943899] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:20.415 [2024-07-24 22:33:45.943992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923040 ] 00:24:20.415 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:20.415 Zero copy mechanism will not be used. 00:24:20.415 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.415 [2024-07-24 22:33:46.005261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.672 [2024-07-24 22:33:46.124239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.672 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:20.672 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:20.672 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.672 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc 
bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:20.929 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:21.188 nvme0n1 00:24:21.188 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:21.188 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:21.188 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.448 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:21.448 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:21.448 22:33:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:21.448 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:21.448 Zero copy mechanism will not be used. 00:24:21.448 Running I/O for 2 seconds... 
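The xtrace above shows the test's verification mechanism: after corrupting crc32c digests with `accel_error_inject_error`, it calls `bdev_get_iostat` over the bperf UNIX socket and walks the JSON with a `jq` filter to read the `command_transient_transport_error` counter (the `get_transient_errcount` helper in host/digest.sh). A minimal standalone sketch of that extraction step follows; the sample JSON below is illustrative, not captured from this run, though the field names match the jq filter used in the transcript:

```shell
# Hypothetical bdev_get_iostat output; only the fields the jq filter
# touches are included. Field names mirror the filter in host/digest.sh.
cat > /tmp/iostat_sample.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 132
          }
        }
      }
    }
  ]
}
EOF

# Same filter the test applies: descend to the transient transport
# error counter for the first (and only) bdev.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat_sample.json)

# The test's pass condition is simply (( errcount > 0 )): at least one
# injected digest error surfaced as a transient transport error.
(( errcount > 0 )) && echo "transient errors: $errcount"
```

Note the counter only increments because bperf was started with `bdev_nvme_set_options --nvme-error-stat` earlier in the transcript; without that flag the `nvme_error` statistics are not collected.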
00:24:21.448 [2024-07-24 22:33:46.999429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:46.999498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.448 [2024-07-24 22:33:46.999522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.448 [2024-07-24 22:33:47.008919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:47.008956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.448 [2024-07-24 22:33:47.008975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.448 [2024-07-24 22:33:47.018412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:47.018446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.448 [2024-07-24 22:33:47.018465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.448 [2024-07-24 22:33:47.027988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:47.028024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.448 [2024-07-24 22:33:47.028043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.448 [2024-07-24 22:33:47.036763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:47.036799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.448 [2024-07-24 22:33:47.036819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.448 [2024-07-24 22:33:47.046787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.448 [2024-07-24 22:33:47.046821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.046841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.056769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.056804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.056823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.066747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.066782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.066801] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.076624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.076669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.076689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.085934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.085968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.085988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.095497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.095531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.095550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.105220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.105255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.105274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.116114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.116149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.116169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.125494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.125567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.125587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.133919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.133953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.133972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.449 [2024-07-24 22:33:47.143296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.449 [2024-07-24 22:33:47.143330] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.449 [2024-07-24 22:33:47.143349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.153020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.153055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.153081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.162226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.162260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.162279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.171828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.171862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.171881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.181160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 
22:33:47.181194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.181212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.190254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.190288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.190307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.199658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.199692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.199711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.208837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.208871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.208890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.218368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.218402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.218421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.227798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.227831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.227850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.237211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.237251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.237271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.246669] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.246704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.246724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.256830] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.256864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.256882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.265422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.265458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.265477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.274862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.274898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.274918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.284323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.284357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.284376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.293695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.293731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.293751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.303079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.303114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.303133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.312605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.312640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.312660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.710 [2024-07-24 22:33:47.322099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.710 [2024-07-24 22:33:47.322133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.710 [2024-07-24 22:33:47.322152] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.332272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.332307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.332326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.341684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.341721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.341739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.350265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.350300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.350319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.359710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.359744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 
[2024-07-24 22:33:47.359764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.369131] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.369165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.369184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.378453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.378496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.378516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.387916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.387949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.387968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.397300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.397335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.397361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.711 [2024-07-24 22:33:47.407493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.711 [2024-07-24 22:33:47.407529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.711 [2024-07-24 22:33:47.407821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.416043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.416078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.416097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.425466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.425507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.425527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.434936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.434970] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.434989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.444382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.444417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.444437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.453713] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.453747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.453766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.463088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.463122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.463141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.472340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.472374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.472393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.972 [2024-07-24 22:33:47.481805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.972 [2024-07-24 22:33:47.481840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.972 [2024-07-24 22:33:47.481860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.491233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.491267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.491286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.501306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.501340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.501359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.509990] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.510026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.510045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.519291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.519327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.519346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.528624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.528660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.528680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.538033] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.538068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.538086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.547298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.547333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.547352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.556612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.556647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.556673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.565960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.565996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.566015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.575246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.575281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.575300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.584753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.584788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.584807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.594000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.594034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.594054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.603375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.603411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.603431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.612835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.612872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 
22:33:47.612891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.622323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.622357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.622377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.631708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.631743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.631762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.640916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.640963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.973 [2024-07-24 22:33:47.640983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:21.973 [2024-07-24 22:33:47.650542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.973 [2024-07-24 22:33:47.650577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.974 [2024-07-24 22:33:47.650597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:21.974 [2024-07-24 22:33:47.659551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.974 [2024-07-24 22:33:47.659586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.974 [2024-07-24 22:33:47.659605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:21.974 [2024-07-24 22:33:47.668888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:21.974 [2024-07-24 22:33:47.668922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.974 [2024-07-24 22:33:47.668941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.678821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.678893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.679188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.687245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.687279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.687298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.696472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.696514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.696534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.706052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.706095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.706115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.715295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.715332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.715351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.724574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.724608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.724627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.733870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.733906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.733925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.743182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.743217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.743237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.752556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.752590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.752610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.762167] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.762205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.762224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.771959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.772000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.772019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.235 [2024-07-24 22:33:47.781703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.235 [2024-07-24 22:33:47.781743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.235 [2024-07-24 22:33:47.781762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.791501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.791546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.791566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.801794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.801832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.801900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.811092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.811132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.811151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.820952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.820990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.821009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.831387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.831426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.831445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.841653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.841692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.841712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.851681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.851720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.851740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.861824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.861863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.861882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.871684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.871722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 
22:33:47.871742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.881840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.881881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.881900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.891769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.891821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.891841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.901840] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.901878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.901899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.911389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.911428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.911448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.921731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.921771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.921791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.236 [2024-07-24 22:33:47.931641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.236 [2024-07-24 22:33:47.931678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.236 [2024-07-24 22:33:47.931697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.940855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.940894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.940914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.950940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.950980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.950999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.960452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.960500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.960520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.970648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.970686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.970719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.980574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.980612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.980632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.990130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.990170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.990189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:47.999927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:47.999967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:47.999986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.010205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.010245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.010265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.020353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.020401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.020420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.030385] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.030426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.030445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.040536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.040575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.040594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.050678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.050717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.050737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.060812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.060864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.060884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.070829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.070868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.070887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.080682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.080720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.080739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.090657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.090695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.090714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.100451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.100500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.100521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.110649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.110687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.110706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.120805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.120844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.120864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.130858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.130897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.130916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.141125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.141163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 
22:33:48.141182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.151180] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.151219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.151238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.161092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.161133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.161152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.171283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.171324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.171343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.181245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.181284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.181304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.496 [2024-07-24 22:33:48.191340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.496 [2024-07-24 22:33:48.191380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.496 [2024-07-24 22:33:48.191399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.201325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.201367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.201386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.211388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.211430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.211449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.221370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.221408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.221427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.231420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.231458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.231497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.241293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.241332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.241351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.251446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.251492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.251514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.261711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.261792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.261813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.271198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.756 [2024-07-24 22:33:48.271237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.756 [2024-07-24 22:33:48.271256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.756 [2024-07-24 22:33:48.281348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.281389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.281409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.291360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.291398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.291417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.301207] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.301247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.301266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.311111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.311150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.311204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.321123] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.321173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.321194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.330828] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.330868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.330888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.340645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.340684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.340703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.350638] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.350680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.350701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.360955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.360995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.361014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.370989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.371028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.371047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.381061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.381100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.381120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.390814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.390853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.390872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.401010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.401048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.401067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.409985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.410116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.410165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.420007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.420047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.420066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.430212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.430251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.430270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.436878] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.436914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.436933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.445421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.445461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.445488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:22.757 [2024-07-24 22:33:48.455125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:22.757 [2024-07-24 22:33:48.455164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:22.757 [2024-07-24 22:33:48.455183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.464896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.464935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.464955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.474948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.474988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.475007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.484995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.485035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.485069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.494890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.494927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.494947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.504781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.504820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.504840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.514814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.514854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.524426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.524465] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.524490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.534680] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.534725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.534745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.544162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.544199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.544218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.553784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.553820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.017 [2024-07-24 22:33:48.553839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.017 [2024-07-24 22:33:48.563573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:23.017 [2024-07-24 22:33:48.563613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.563632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.573174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.573209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.573228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.582187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.582223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.582381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.592581] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.592622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.592641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.599262] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.599331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.599515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.608134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.608170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.608188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.617778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.617814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.617833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.627700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.627735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.627753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.637722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.637757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.637776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.647326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.647360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.647387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.657277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.657311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.657330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.667319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.667353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.667372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.676659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.676694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.676713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.686819] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.686855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.686874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.697122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.697159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.697177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.706666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.706700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 
22:33:48.706720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.018 [2024-07-24 22:33:48.716502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.018 [2024-07-24 22:33:48.716536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.018 [2024-07-24 22:33:48.716555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.725451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.725496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.725676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.735138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.735181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.735201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.745335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.745370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9248 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.745389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.754806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.754841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.754860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.764656] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.764690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.764709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.774746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.774780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.774800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.784585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.784621] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.784640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.794274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.277 [2024-07-24 22:33:48.794310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.277 [2024-07-24 22:33:48.794329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.277 [2024-07-24 22:33:48.804295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.804329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.804348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.814337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.814379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.814398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.823919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.823953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.823972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.833703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.833737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.833756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.843536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.843570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.843589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.853328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.853364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.853383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.862683] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.862717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.862736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.868234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.868268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.868288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.878078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.878113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.878132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.887942] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.887977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.887998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.897396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.897432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.897463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.907395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.907430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.907450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.916884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.916919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.916938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.926532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.926568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.926594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.936470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.936516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.936536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.946344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.946401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.956371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.956405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.956424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.965733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.965769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.965788] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:23.278 [2024-07-24 22:33:48.975078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.278 [2024-07-24 22:33:48.975113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.278 [2024-07-24 22:33:48.975132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:23.536 [2024-07-24 22:33:48.984387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.536 [2024-07-24 22:33:48.984429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-24 22:33:48.984449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:23.536 [2024-07-24 22:33:48.993726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19819d0) 00:24:23.536 [2024-07-24 22:33:48.993759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:23.536 [2024-07-24 22:33:48.993779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:23.536 00:24:23.536 Latency(us) 00:24:23.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.536 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:23.536 nvme0n1 : 2.04 3159.54 394.94 0.00 0.00 4963.89 1128.68 45438.29 00:24:23.536 
=================================================================================================================== 00:24:23.536 Total : 3159.54 394.94 0.00 0.00 4963.89 1128.68 45438.29 00:24:23.536 0 00:24:23.536 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:23.536 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:23.536 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:23.536 | .driver_specific 00:24:23.536 | .nvme_error 00:24:23.536 | .status_code 00:24:23.536 | .command_transient_transport_error' 00:24:23.536 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3923040 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3923040 ']' 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3923040 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3923040 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3923040' 00:24:23.795 killing process with pid 3923040 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3923040 00:24:23.795 Received shutdown signal, test time was about 2.000000 seconds 00:24:23.795 00:24:23.795 Latency(us) 00:24:23.795 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.795 =================================================================================================================== 00:24:23.795 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:23.795 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3923040 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3923440 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3923440 /var/tmp/bperf.sock 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:24:24.053 22:33:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3923440 ']' 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:24.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:24.053 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.053 [2024-07-24 22:33:49.637438] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:24.053 [2024-07-24 22:33:49.637541] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923440 ] 00:24:24.053 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.053 [2024-07-24 22:33:49.697669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:24.313 [2024-07-24 22:33:49.814487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:24.313 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.313 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:24.313 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.313 22:33:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:24.572 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:24.572 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.572 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:24.572 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.572 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:24.572 22:33:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.141 nvme0n1 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:25.141 22:33:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:25.141 Running I/O for 2 seconds... 
00:24:25.400 [2024-07-24 22:33:50.869111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.400 [2024-07-24 22:33:50.869373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.400 [2024-07-24 22:33:50.869411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.400 [2024-07-24 22:33:50.884001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.400 [2024-07-24 22:33:50.884225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.400 [2024-07-24 22:33:50.884256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.400 [2024-07-24 22:33:50.898836] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.400 [2024-07-24 22:33:50.899057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.899087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.913635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.913857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.913887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.928414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.928651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.928682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.943139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.943364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.943392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.957826] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.958048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.958077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.972476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.972706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.972736] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:50.987106] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:50.987324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:50.987352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:51.001763] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:51.001984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:51.002013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:51.016426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:51.016653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:17049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:51.016683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:25.401 [2024-07-24 22:33:51.031069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:25.401 [2024-07-24 22:33:51.031288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:25.401 [2024-07-24 22:33:51.031318] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:24:25.401 [2024-07-24 22:33:51.045662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8
00:24:25.401 [2024-07-24 22:33:51.045879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:25.401 [2024-07-24 22:33:51.045908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... same three-line pattern (tcp.c:2113:data_crc32_calc_done Data digest error on tqpair=(0x23844d0) -> nvme_io_qpair_print_command WRITE -> spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeated from 22:33:51.060 through 22:33:52.238, cycling qid:1 cid 8/9/10/125/126 with varying lba values; repeated entries omitted ...]
len:0x1000 00:24:26.704 [2024-07-24 22:33:52.238601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.253246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.253464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.253500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.268003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.268222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.268250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.282657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.282876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.282904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.297233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.297456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.297490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.311835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.312052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.312081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.326439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.326664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.326692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.341043] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.341258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.341286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.355644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.355861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.355889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.370205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.370419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.370461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.384798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.385022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.385051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.704 [2024-07-24 22:33:52.399406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.704 [2024-07-24 22:33:52.399633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.704 [2024-07-24 22:33:52.399661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.414289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 
[2024-07-24 22:33:52.414514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.414542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.428926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.429150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.429178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.443675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.443895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.443924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.458502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.458718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.458747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.473288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.473508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.473537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.487908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.488126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.488154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.502512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.502738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.502766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.517310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.517528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.517557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.531963] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.532182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.532210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.546667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.546884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.546911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.561393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.561618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.561646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.576024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.576243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.576271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 
00:24:26.965 [2024-07-24 22:33:52.590711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.590933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.590962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.605290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.605507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.605536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.965 [2024-07-24 22:33:52.619869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.965 [2024-07-24 22:33:52.620089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.965 [2024-07-24 22:33:52.620118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.966 [2024-07-24 22:33:52.634456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.966 [2024-07-24 22:33:52.634688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.966 [2024-07-24 22:33:52.634718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.966 [2024-07-24 22:33:52.649104] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.966 [2024-07-24 22:33:52.649323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.966 [2024-07-24 22:33:52.649352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:26.966 [2024-07-24 22:33:52.664067] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:26.966 [2024-07-24 22:33:52.664287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:26.966 [2024-07-24 22:33:52.664317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.678995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.679217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.679246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.693714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.693937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.693966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.708407] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.708633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.708662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.723035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.723252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.723280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.737647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.737865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.737894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.752269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.752494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.224 [2024-07-24 22:33:52.752530] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.224 [2024-07-24 22:33:52.766869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.224 [2024-07-24 22:33:52.767085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.767113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.781718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.781935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.781964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.796303] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.796529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.796558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.810968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.811186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3284 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:27.225 [2024-07-24 22:33:52.811223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.825678] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.825896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.825924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.840273] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.840505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.840533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 [2024-07-24 22:33:52.854926] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23844d0) with pdu=0x2000190fe2e8 00:24:27.225 [2024-07-24 22:33:52.855141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:27.225 [2024-07-24 22:33:52.855169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:27.225 00:24:27.225 Latency(us) 00:24:27.225 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.225 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:27.225 nvme0n1 : 2.01 17304.45 67.60 0.00 0.00 7378.58 
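Each `data_crc32_calc_done` error above is the host detecting a mismatched CRC-32C data digest (DDGST) on a received NVMe/TCP data PDU, and each mismatched write then completes with a transient transport error. As a pure-Python illustration of the digest check (SPDK computes this through its accel framework, so this is a sketch of the arithmetic, not SPDK's implementation):

```python
def crc32c(data: bytes) -> int:
    # Table-driven CRC-32C (Castagnoli), reflected polynomial 0x82F63B78,
    # the checksum NVMe/TCP uses for its HDGST/DDGST digest fields.
    poly = 0x82F63B78
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    crc = 0xFFFFFFFF
    for b in data:
        crc = table[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF

def data_digest_ok(pdu_data: bytes, received_digest: int) -> bool:
    # The receiver recomputes the digest over the PDU payload and compares
    # it with the DDGST that arrived on the wire; a mismatch is what the
    # log reports as "Data digest error".
    return crc32c(pdu_data) == received_digest

# Standard CRC-32C check value:
assert crc32c(b"123456789") == 0xE3069283
```

The test here corrupts the crc32c computation on purpose (via `accel_error_inject_error -o crc32c -t corrupt`), so every digest comparison fails by design.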
6990.51 15922.82
===================================================================================================================
Total : 17304.45 67.60 0.00 0.00 7378.58 6990.51 15922.82
0
22:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
22:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
22:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
22:33:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 ))
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3923440
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3923440 ']'
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3923440
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3923440
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3923440'
killing process with pid 3923440
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3923440
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
===================================================================================================================
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3923440
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3923751
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3923751 /var/tmp/bperf.sock
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3923751 ']'
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-24 22:33:53.453056] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
[2024-07-24 22:33:53.453145] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3923751 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
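`get_transient_errcount` above pipes `bperf_rpc bdev_get_iostat` through `jq` to pull out the transient-transport-error counter, and `(( 136 > 0 ))` is the pass check on that count. The JSON shape implied by that filter can be walked the same way in Python; the payload below is illustrative, inferred from the jq path rather than captured output:

```python
import json

# Illustrative bdev_get_iostat-style payload. The field layout is an
# assumption derived from the jq filter in the trace:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#   | .command_transient_transport_error
iostat_json = """
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 136
          }
        }
      }
    }
  ]
}
"""

def get_transient_errcount(raw: str) -> int:
    # Same traversal the jq filter performs, one key at a time.
    stats = json.loads(raw)
    return (stats["bdevs"][0]["driver_specific"]["nvme_error"]
                 ["status_code"]["command_transient_transport_error"])

print(get_transient_errcount(iostat_json))  # 136
```

The test passes as long as at least one write was failed and counted as a transient transport error, which the `--nvme-error-stat` option enables tracking for.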
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 22:33:53.512736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 22:33:53.629086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
22:33:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
22:33:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
[2024-07-24 22:33:54.451579] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
[2024-07-24 22:33:54.451967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 22:33:54.452007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-07-24 22:33:54.458343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
[2024-07-24 22:33:54.458731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 22:33:54.458764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[2024-07-24 22:33:54.464936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
[2024-07-24 22:33:54.465285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 22:33:54.465316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[2024-07-24 22:33:54.471434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
[2024-07-24 22:33:54.471794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 22:33:54.471825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:28.777 [2024-07-24 22:33:54.477984] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:28.777 [2024-07-24 22:33:54.478339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.777 [2024-07-24 22:33:54.478371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.036 [2024-07-24 22:33:54.484515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.036 [2024-07-24 22:33:54.484863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.036 [2024-07-24 22:33:54.484894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.036 [2024-07-24 22:33:54.490976] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.036 [2024-07-24 22:33:54.491325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.036 [2024-07-24 22:33:54.491356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.036 [2024-07-24 22:33:54.497461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.497824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.497855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.504091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.504434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.504466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.510636] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.510983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.511015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.517116] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.517463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.517502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.523590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.523934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:29.037 [2024-07-24 22:33:54.523965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.530062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.530407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.530438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.536539] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.536884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.536915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.542950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.543294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.543324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.549422] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.549777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.549808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.555897] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.556249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.556279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.562353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.562705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.562736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.568768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.569115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.569153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.575189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.575549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.575579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.581729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.582076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.582106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.588838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.589185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.589216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.596168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.596522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.596553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.602797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:29.037 [2024-07-24 22:33:54.603148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.603178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.609259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.609611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.609642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.616440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.616797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.616828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.623875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.624223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.624253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.630702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.631051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.631082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.637299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.637654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.637685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.643816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.644162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.644193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.651665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.652029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.652060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 
22:33:54.658206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.658558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.658589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.664648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.664995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.665025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.037 [2024-07-24 22:33:54.671081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.037 [2024-07-24 22:33:54.671425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.037 [2024-07-24 22:33:54.671456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.677416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.677769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.677800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.683782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.684127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.684165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.690144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.690495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.690533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.696702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.697047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.697078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.703143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.703499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.703537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.709574] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.709924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.709955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.715847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.716194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.716225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.722206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.722566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.722596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.728551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.728902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.728933] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.038 [2024-07-24 22:33:54.734880] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.038 [2024-07-24 22:33:54.735228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.038 [2024-07-24 22:33:54.735258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.741276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.741640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.741670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.747736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.748082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.748112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.754085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.754430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.754460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.761620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.761981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.762011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.768115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.768461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.768499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.774504] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.774851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.774881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.780854] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.781196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.781226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.787127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.787472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.787509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.793682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.794034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.794063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.800046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.800393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.800423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.806428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.806783] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.806814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.812797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.813149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.813179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.819053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.819387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.819417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.825396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.825728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.825758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.831681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:29.298 [2024-07-24 22:33:54.832026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.832056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.838121] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.838467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.838505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.844497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.844843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.844873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.850857] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.851202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.851241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.857274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.857628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.857658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.863693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.864039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.864070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.870033] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.870375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.870406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.876569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.876916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.876947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 
22:33:54.884492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.298 [2024-07-24 22:33:54.884839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.298 [2024-07-24 22:33:54.884869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.298 [2024-07-24 22:33:54.893061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.893425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.893456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.902000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.902346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.902376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.910693] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.911042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.911073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.919250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.919613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.919645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.928039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.928384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.928415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.936580] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.936925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.936956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.945019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.945383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.945413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.952697] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.953043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.953073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.959301] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.959655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.959685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.966200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.966558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.966589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.973780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.974143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.974173] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.981517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.981880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.981911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.989500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.989846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.989876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.299 [2024-07-24 22:33:54.996597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.299 [2024-07-24 22:33:54.996947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.299 [2024-07-24 22:33:54.996978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.003119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.003468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.003508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.009681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.010033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.010064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.017402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.017758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.017788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.025898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.026262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.026293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.034360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.034715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.034745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.042954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.043142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.043173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.051289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.051698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.051739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.058268] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.058603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.058634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.064451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.064793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.064824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.070998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.071326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.071357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.078030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.078340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.558 [2024-07-24 22:33:55.078371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.558 [2024-07-24 22:33:55.084130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.558 [2024-07-24 22:33:55.084440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.084470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.090110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:29.559 [2024-07-24 22:33:55.090421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.090451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.096103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.096411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.096440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.102184] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.102500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.102529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.108197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.108534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.108564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.114270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.114586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.114615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.120344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.120659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.120690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.126368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.126688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.126718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.132458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.132778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.132807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 
22:33:55.138453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.138775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.138805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.144701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.145014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.145045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.150699] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.151015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.151044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.156847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.157159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.157189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.163004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.163315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.163345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.169086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.169405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.169435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.175108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.175425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.175455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.181256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.181576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.181607] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.187251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.187570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.187600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.193337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.193653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.193683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.199319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.199669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.205452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 
22:33:55.205805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.211555] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.211865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.211902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.217788] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.218099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.218129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.223833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.224146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.224175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.229934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.230250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.230280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.236041] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.236350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.236380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.242098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.559 [2024-07-24 22:33:55.242409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.559 [2024-07-24 22:33:55.242439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:29.559 [2024-07-24 22:33:55.248084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.560 [2024-07-24 22:33:55.248402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:29.560 [2024-07-24 22:33:55.248432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:29.560 [2024-07-24 22:33:55.254181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:29.560 [2024-07-24 22:33:55.254497] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.560 [2024-07-24 22:33:55.254526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:29.560 [2024-07-24 22:33:55.260261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
00:24:29.560 [2024-07-24 22:33:55.260579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:29.560 [2024-07-24 22:33:55.260609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line cycle (data_crc32_calc_done data digest error on tqpair=(0x2384810), WRITE sqid:1 cid:15 command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for subsequent WRITE commands with varying lba values, timestamps 22:33:55.266412 through 22:33:55.742578 ...]
00:24:30.108 [2024-07-24 22:33:55.748229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90
00:24:30.108 [2024-07-24 22:33:55.748549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:30.108 [2024-07-24 22:33:55.748579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:30.108 [2024-07-24 
22:33:55.754243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.754559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.754588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.760270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.760587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.760617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.766344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.766663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.766694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.772349] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.772674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.772704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.778372] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.778688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.778718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.784473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.784794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.784824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.108 [2024-07-24 22:33:55.790736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.108 [2024-07-24 22:33:55.791063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.108 [2024-07-24 22:33:55.791102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.796998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.797306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.797336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.803134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.803448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.803485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.809166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.809492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.809522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.815475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.815810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.815840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.821672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.821987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.822017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.827798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.828109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.828139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.833865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.834178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.834208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.839935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.840244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.840273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.846014] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.846324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.846354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.852046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.852364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.852394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.858125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.858437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.858467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.864202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.864526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.864557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.870217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.870541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.870571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.876187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.876505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.876535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.882219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.882536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.882567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.888286] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.888607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.888637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.894298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.894622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.894660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.900291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.900607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.900637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.906299] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.906623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.906653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.912396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.912713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.912744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.918431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:30.374 [2024-07-24 22:33:55.918757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.918789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.924393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.924709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.924740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.930382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.930697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.930728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.936453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.936776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.374 [2024-07-24 22:33:55.936806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.374 [2024-07-24 22:33:55.942462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.374 [2024-07-24 22:33:55.942787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.942817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.948529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.948849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.948878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.954686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.954997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.955027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.960717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.961027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.961056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 
22:33:55.966754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.967063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.967093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.972919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.973230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.973259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.978987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.979302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.979333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.985022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.985331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.985361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.991050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.991360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.991390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:55.997061] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:55.997373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:55.997403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.003137] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.003448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.003493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.009125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.009436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.009466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.015098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.015405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.015436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.021111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.021420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.021450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.027112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.027420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.027450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.033156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.033465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.033505] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.039094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.039404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.039434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.045270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.045587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.045617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.051336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.051654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.051693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.057954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.058293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.058323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.065859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.066278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.066308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.375 [2024-07-24 22:33:56.074391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.375 [2024-07-24 22:33:56.074820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.375 [2024-07-24 22:33:56.074850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.634 [2024-07-24 22:33:56.083084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.634 [2024-07-24 22:33:56.083491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.634 [2024-07-24 22:33:56.083520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.091719] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.092148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.092179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.100277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.100634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.100665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.108686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.109095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.109125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.117109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.117501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.117532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.125963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.126306] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.126337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.134558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.134953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.134984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.143149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.143575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.143610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.151685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.152027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.152065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.158463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:30.635 [2024-07-24 22:33:56.158788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.158817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.164549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.164863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.164894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.170639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.170948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.170978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.176746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.177056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.177086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.182752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.183062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.183103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.188861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.189171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.189202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.194895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.195202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.195232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.200986] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.201298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.201328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 
22:33:56.207046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.207354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.207384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.213083] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.213395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.213425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.219205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.219524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.219555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.225241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.225564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.225617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.231318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.231635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.231666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.237356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.237683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.237713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.243451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.243771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.243801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.249592] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.249905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.255635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.255946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.255977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.261577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.635 [2024-07-24 22:33:56.261887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.635 [2024-07-24 22:33:56.261917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.635 [2024-07-24 22:33:56.267575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.267886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.267917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.273616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.273925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.273954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.279753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.280064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.280094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.285822] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.286131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.286162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.291820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.292131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.292162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.297859] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.298170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.298200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.304004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.304316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.304346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.310109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.310418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.310448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.316091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.316401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.316432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.322077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.322388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.322417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.328051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.328360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.328390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.636 [2024-07-24 22:33:56.334046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.636 [2024-07-24 22:33:56.334356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.636 [2024-07-24 22:33:56.334385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.894 [2024-07-24 22:33:56.340110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.894 [2024-07-24 22:33:56.340420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.894 [2024-07-24 22:33:56.340457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.894 [2024-07-24 22:33:56.346135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.894 [2024-07-24 22:33:56.346452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.894 [2024-07-24 22:33:56.346489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.894 [2024-07-24 22:33:56.352181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.894 [2024-07-24 22:33:56.352504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.352534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.358228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.358544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.358575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.364260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.364575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.364605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.370277] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 
00:24:30.895 [2024-07-24 22:33:56.370596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.370625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.376225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.376544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.376574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.382319] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.382640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.382670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.388329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.388646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.388677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.394396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.394719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.394749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.400335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.400651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.400682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.406345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.406665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.406695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.412405] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.412723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.412753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 
22:33:56.418380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.418701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.418732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.424393] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.424711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.424741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.430410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.430729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.430759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.436690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.437002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.437032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:30.895 [2024-07-24 22:33:56.442641] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2384810) with pdu=0x2000190fef90 00:24:30.895 [2024-07-24 22:33:56.442951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:30.895 [2024-07-24 22:33:56.442980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:30.895 00:24:30.895 Latency(us) 00:24:30.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:30.895 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:30.895 nvme0n1 : 2.00 4840.13 605.02 0.00 0.00 3297.74 2415.12 9514.86 00:24:30.895 =================================================================================================================== 00:24:30.895 Total : 4840.13 605.02 0.00 0.00 3297.74 2415.12 9514.86 00:24:30.895 0 00:24:30.895 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:30.895 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:30.895 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:30.895 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:30.895 | .driver_specific 00:24:30.895 | .nvme_error 00:24:30.895 | .status_code 00:24:30.895 | .command_transient_transport_error' 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 312 > 0 )) 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 
3923751 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3923751 ']' 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3923751 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3923751 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3923751' 00:24:31.153 killing process with pid 3923751 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3923751 00:24:31.153 Received shutdown signal, test time was about 2.000000 seconds 00:24:31.153 00:24:31.153 Latency(us) 00:24:31.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:31.153 =================================================================================================================== 00:24:31.153 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:31.153 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3923751 00:24:31.413 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3922616 00:24:31.413 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3922616 ']' 00:24:31.413 
22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3922616 00:24:31.413 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:24:31.413 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:31.413 22:33:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3922616 00:24:31.413 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:31.413 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:31.413 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3922616' 00:24:31.413 killing process with pid 3922616 00:24:31.413 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3922616 00:24:31.413 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3922616 00:24:31.672 00:24:31.672 real 0m15.558s 00:24:31.672 user 0m30.786s 00:24:31.672 sys 0m4.029s 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:31.672 ************************************ 00:24:31.672 END TEST nvmf_digest_error 00:24:31.672 ************************************ 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:31.672 22:33:57 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:31.672 rmmod nvme_tcp 00:24:31.672 rmmod nvme_fabrics 00:24:31.672 rmmod nvme_keyring 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3922616 ']' 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3922616 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3922616 ']' 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3922616 00:24:31.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3922616) - No such process 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3922616 is not found' 00:24:31.672 Process with pid 3922616 is not found 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.672 22:33:57 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:34.210 00:24:34.210 real 0m35.260s 00:24:34.210 user 1m2.536s 00:24:34.210 sys 0m9.507s 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.210 ************************************ 00:24:34.210 END TEST nvmf_digest 00:24:34.210 ************************************ 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.210 ************************************ 00:24:34.210 START TEST nvmf_bdevperf 00:24:34.210 ************************************ 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:34.210 * Looking for test storage... 00:24:34.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:34.210 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.211 22:33:59 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:34.211 22:33:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:35.590 22:34:01 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:35.590 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:35.590 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:35.590 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:35.591 Found net devices under 0000:08:00.0: cvl_0_0 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:35.591 Found net devices under 0000:08:00.1: cvl_0_1 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:35.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:35.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:24:35.591 00:24:35.591 --- 10.0.0.2 ping statistics --- 00:24:35.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.591 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:35.591 00:24:35.591 --- 10.0.0.1 ping statistics --- 00:24:35.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.591 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:35.591 
22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3925571 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3925571 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3925571 ']' 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:35.591 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:35.851 [2024-07-24 22:34:01.330772] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:35.851 [2024-07-24 22:34:01.330868] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.851 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.851 [2024-07-24 22:34:01.397539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:35.851 [2024-07-24 22:34:01.518058] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.851 [2024-07-24 22:34:01.518125] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.851 [2024-07-24 22:34:01.518141] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.851 [2024-07-24 22:34:01.518154] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.851 [2024-07-24 22:34:01.518165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:35.851 [2024-07-24 22:34:01.518256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.851 [2024-07-24 22:34:01.518311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.851 [2024-07-24 22:34:01.518314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 [2024-07-24 22:34:01.655867] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 Malloc0 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 
-- # [[ 0 == 0 ]] 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:36.112 [2024-07-24 22:34:01.719753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:36.112 
22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:36.112 { 00:24:36.112 "params": { 00:24:36.112 "name": "Nvme$subsystem", 00:24:36.112 "trtype": "$TEST_TRANSPORT", 00:24:36.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.112 "adrfam": "ipv4", 00:24:36.112 "trsvcid": "$NVMF_PORT", 00:24:36.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.112 "hdgst": ${hdgst:-false}, 00:24:36.112 "ddgst": ${ddgst:-false} 00:24:36.112 }, 00:24:36.112 "method": "bdev_nvme_attach_controller" 00:24:36.112 } 00:24:36.112 EOF 00:24:36.112 )") 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:36.112 22:34:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:36.112 "params": { 00:24:36.113 "name": "Nvme1", 00:24:36.113 "trtype": "tcp", 00:24:36.113 "traddr": "10.0.0.2", 00:24:36.113 "adrfam": "ipv4", 00:24:36.113 "trsvcid": "4420", 00:24:36.113 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.113 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.113 "hdgst": false, 00:24:36.113 "ddgst": false 00:24:36.113 }, 00:24:36.113 "method": "bdev_nvme_attach_controller" 00:24:36.113 }' 00:24:36.113 [2024-07-24 22:34:01.771557] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:24:36.113 [2024-07-24 22:34:01.771651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925660 ] 00:24:36.113 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.374 [2024-07-24 22:34:01.833325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.374 [2024-07-24 22:34:01.952747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.634 Running I/O for 1 seconds... 00:24:37.572 00:24:37.572 Latency(us) 00:24:37.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:37.572 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:37.572 Verification LBA range: start 0x0 length 0x4000 00:24:37.572 Nvme1n1 : 1.00 7302.64 28.53 0.00 0.00 17441.31 1577.72 14272.28 00:24:37.572 =================================================================================================================== 00:24:37.572 Total : 7302.64 28.53 0.00 0.00 17441.31 1577.72 14272.28 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3925798 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:37.836 22:34:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:37.836 { 00:24:37.836 "params": { 00:24:37.836 "name": "Nvme$subsystem", 00:24:37.836 "trtype": "$TEST_TRANSPORT", 00:24:37.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:37.836 "adrfam": "ipv4", 00:24:37.836 "trsvcid": "$NVMF_PORT", 00:24:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:37.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:37.836 "hdgst": ${hdgst:-false}, 00:24:37.836 "ddgst": ${ddgst:-false} 00:24:37.836 }, 00:24:37.836 "method": "bdev_nvme_attach_controller" 00:24:37.836 } 00:24:37.836 EOF 00:24:37.836 )") 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:37.836 22:34:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:37.836 "params": { 00:24:37.836 "name": "Nvme1", 00:24:37.836 "trtype": "tcp", 00:24:37.836 "traddr": "10.0.0.2", 00:24:37.836 "adrfam": "ipv4", 00:24:37.836 "trsvcid": "4420", 00:24:37.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:37.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:37.836 "hdgst": false, 00:24:37.836 "ddgst": false 00:24:37.836 }, 00:24:37.836 "method": "bdev_nvme_attach_controller" 00:24:37.836 }' 00:24:37.836 [2024-07-24 22:34:03.513182] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
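The config blob printed above is built by a helper in `nvmf/common.sh` that accumulates one JSON fragment per subsystem via a heredoc, joins the fragments with `IFS=,`, and normalizes the result with `jq` before feeding it to bdevperf through `--json /dev/fd/63`. A minimal standalone sketch of that pattern follows; the variable values (`tcp`, `10.0.0.2`, `4420`) mirror this log, and the script is an illustration of the technique, not the actual SPDK helper:

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern from the trace above:
# each subsystem contributes a bdev_nvme_attach_controller JSON fragment
# via a heredoc; fragments are joined with commas and printed.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the per-subsystem fragments with commas and emit the config;
# the real script pipes this through jq and into bdevperf via /dev/fd/63.
(IFS=,; printf '%s\n' "${config[*]}")
```

Note that `${hdgst:-false}` and `${ddgst:-false}` default the digest flags to `false` when the caller has not exported them, which is why the normalized config in the log shows `"hdgst": false, "ddgst": false`.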
00:24:37.836 [2024-07-24 22:34:03.513277] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3925798 ] 00:24:38.097 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.097 [2024-07-24 22:34:03.575983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.097 [2024-07-24 22:34:03.692243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.356 Running I/O for 15 seconds... 00:24:40.892 22:34:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3925571 00:24:40.892 22:34:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:40.892 [2024-07-24 22:34:06.479585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17472 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.479931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 
22:34:06.479966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.479981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480160] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 
lba:17608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 
[2024-07-24 22:34:06.480569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.893 [2024-07-24 22:34:06.480959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.893 [2024-07-24 22:34:06.480976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.480992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 
[2024-07-24 22:34:06.481140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 
lba:17880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 
[2024-07-24 22:34:06.481730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:17928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.481974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.481992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 
lba:18016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.894 [2024-07-24 22:34:06.482262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.894 [2024-07-24 22:34:06.482278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:40.895 
[2024-07-24 22:34:06.482295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:40.895 [2024-07-24 22:34:06.482691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:18464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:40.895 [2024-07-24 22:34:06.482725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.482978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.482994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:18264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:18320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.895 [2024-07-24 22:34:06.483625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.895 [2024-07-24 22:34:06.483640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:18384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.483971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.483988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:40.896 [2024-07-24 22:34:06.484003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.484020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9d2bc0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.484038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:40.896 [2024-07-24 22:34:06.484052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:40.896 [2024-07-24 22:34:06.484065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18448 len:8 PRP1 0x0 PRP2 0x0
00:24:40.896 [2024-07-24 22:34:06.484079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:40.896 [2024-07-24 22:34:06.484141] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x9d2bc0 was disconnected and freed. reset controller.
00:24:40.896 [2024-07-24 22:34:06.488545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.488632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.489389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.489439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.489457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.489732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.490000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.490022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.490040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.494168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.503324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.503894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.503935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.503955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.504231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.504510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.504533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.504549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.508609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.517731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.518298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.518338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.518358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.518640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.518910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.518931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.518947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.523017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.532171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.532780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.532822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.532841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.533113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.533381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.533403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.533418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.537498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.546599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.547275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.547316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.547335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.547624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.547894] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.547916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.547932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.551968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.561034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.896 [2024-07-24 22:34:06.561624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.896 [2024-07-24 22:34:06.561665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.896 [2024-07-24 22:34:06.561684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.896 [2024-07-24 22:34:06.561955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.896 [2024-07-24 22:34:06.562223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.896 [2024-07-24 22:34:06.562245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.896 [2024-07-24 22:34:06.562261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.896 [2024-07-24 22:34:06.566313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.896 [2024-07-24 22:34:06.575360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.897 [2024-07-24 22:34:06.575989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.897 [2024-07-24 22:34:06.576043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.897 [2024-07-24 22:34:06.576063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.897 [2024-07-24 22:34:06.576333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.897 [2024-07-24 22:34:06.576612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.897 [2024-07-24 22:34:06.576636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.897 [2024-07-24 22:34:06.576651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.897 [2024-07-24 22:34:06.580686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.897 [2024-07-24 22:34:06.589765] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.897 [2024-07-24 22:34:06.590321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.897 [2024-07-24 22:34:06.590374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:40.897 [2024-07-24 22:34:06.590395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:40.897 [2024-07-24 22:34:06.590670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:40.897 [2024-07-24 22:34:06.590939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.897 [2024-07-24 22:34:06.590961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.897 [2024-07-24 22:34:06.590983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.158 [2024-07-24 22:34:06.595094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.158 [2024-07-24 22:34:06.604252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.158 [2024-07-24 22:34:06.604785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.158 [2024-07-24 22:34:06.604834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.158 [2024-07-24 22:34:06.604855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.158 [2024-07-24 22:34:06.605128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.158 [2024-07-24 22:34:06.605398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.158 [2024-07-24 22:34:06.605421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.158 [2024-07-24 22:34:06.605438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.158 [2024-07-24 22:34:06.609519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.618619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.619270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.619314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.619335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.619619] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.619891] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.619914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.619931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.623972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.633067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.633557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.633598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.633618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.633888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.634157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.634179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.634195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.638232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.647562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.647981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.648021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.648039] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.648304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.648581] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.648604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.648620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.652659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.661955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.662393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.662435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.662454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.662735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.663006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.663028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.663044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.667080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.676505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.676924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.676955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.676973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.677237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.677517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.677540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.677556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.681608] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.690934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.691370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.691400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.691417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.691690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.691964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.691987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.692002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.696044] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.705345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.159 [2024-07-24 22:34:06.705800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.159 [2024-07-24 22:34:06.705830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.159 [2024-07-24 22:34:06.705848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.159 [2024-07-24 22:34:06.706112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.159 [2024-07-24 22:34:06.706379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.159 [2024-07-24 22:34:06.706401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.159 [2024-07-24 22:34:06.706416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.159 [2024-07-24 22:34:06.710494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.159 [2024-07-24 22:34:06.719812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.159 [2024-07-24 22:34:06.720255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.159 [2024-07-24 22:34:06.720295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.159 [2024-07-24 22:34:06.720314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.159 [2024-07-24 22:34:06.720604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.159 [2024-07-24 22:34:06.720874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.159 [2024-07-24 22:34:06.720896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.159 [2024-07-24 22:34:06.720912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.159 [2024-07-24 22:34:06.724951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.159 [2024-07-24 22:34:06.734282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.159 [2024-07-24 22:34:06.734744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.159 [2024-07-24 22:34:06.734776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.159 [2024-07-24 22:34:06.734794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.159 [2024-07-24 22:34:06.735059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.159 [2024-07-24 22:34:06.735327] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.159 [2024-07-24 22:34:06.735349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.159 [2024-07-24 22:34:06.735364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.159 [2024-07-24 22:34:06.739495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.159 [2024-07-24 22:34:06.748837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.159 [2024-07-24 22:34:06.749268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.159 [2024-07-24 22:34:06.749299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.159 [2024-07-24 22:34:06.749316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.159 [2024-07-24 22:34:06.749597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.159 [2024-07-24 22:34:06.749867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.159 [2024-07-24 22:34:06.749888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.159 [2024-07-24 22:34:06.749903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.159 [2024-07-24 22:34:06.753973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.763301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.763762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.763791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.763809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.764073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.764340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.764362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.764378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.768421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.777730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.778233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.778262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.778280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.778559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.778827] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.778849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.778864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.782952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.792082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.792525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.792556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.792580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.792846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.793114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.793136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.793151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.797187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.806470] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.806944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.806985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.807004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.807281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.807565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.807588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.807604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.811648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.820970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.821413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.821444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.821461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.821733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.822000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.822023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.822038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.826101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.835404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.835855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.835884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.835902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.836166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.836432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.836461] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.836477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.840527] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.160 [2024-07-24 22:34:06.849842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.160 [2024-07-24 22:34:06.850251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.160 [2024-07-24 22:34:06.850281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.160 [2024-07-24 22:34:06.850299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.160 [2024-07-24 22:34:06.850571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.160 [2024-07-24 22:34:06.850840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.160 [2024-07-24 22:34:06.850862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.160 [2024-07-24 22:34:06.850877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.160 [2024-07-24 22:34:06.854918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.864325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.864743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.864775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.864792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.865057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.865324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.865347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.865362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.869471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.878764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.879237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.879278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.879297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.879581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.879850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.879871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.879887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.883920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.893286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.893752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.893793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.893813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.894084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.894351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.894373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.894388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.898435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.907769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.908312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.908354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.908373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.908663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.908932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.908955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.908970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.913015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.922313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.922744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.922775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.922793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.923057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.923325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.923347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.923362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.927397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.936688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.937096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.937127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.937144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.422 [2024-07-24 22:34:06.937415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.422 [2024-07-24 22:34:06.937705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.422 [2024-07-24 22:34:06.937743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.422 [2024-07-24 22:34:06.937759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.422 [2024-07-24 22:34:06.941830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.422 [2024-07-24 22:34:06.951155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.422 [2024-07-24 22:34:06.951558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.422 [2024-07-24 22:34:06.951588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.422 [2024-07-24 22:34:06.951605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.423 [2024-07-24 22:34:06.951869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.423 [2024-07-24 22:34:06.952143] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.423 [2024-07-24 22:34:06.952164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.423 [2024-07-24 22:34:06.952180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.423 [2024-07-24 22:34:06.956265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.423 [2024-07-24 22:34:06.965628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.423 [2024-07-24 22:34:06.966069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.423 [2024-07-24 22:34:06.966098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.423 [2024-07-24 22:34:06.966116] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.423 [2024-07-24 22:34:06.966380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.423 [2024-07-24 22:34:06.966656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.423 [2024-07-24 22:34:06.966679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.423 [2024-07-24 22:34:06.966695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.423 [2024-07-24 22:34:06.970745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.423 [2024-07-24 22:34:06.980049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.423 [2024-07-24 22:34:06.980490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.423 [2024-07-24 22:34:06.980519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.423 [2024-07-24 22:34:06.980537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.423 [2024-07-24 22:34:06.980801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.423 [2024-07-24 22:34:06.981074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.423 [2024-07-24 22:34:06.981095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.423 [2024-07-24 22:34:06.981120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.423 [2024-07-24 22:34:06.985152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.423 [2024-07-24 22:34:06.994565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.423 [2024-07-24 22:34:06.994960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.423 [2024-07-24 22:34:06.994991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.423 [2024-07-24 22:34:06.995008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.423 [2024-07-24 22:34:06.995273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.423 [2024-07-24 22:34:06.995551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.423 [2024-07-24 22:34:06.995574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.423 [2024-07-24 22:34:06.995589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.423 [2024-07-24 22:34:06.999655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.423 [2024-07-24 22:34:07.008977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.423 [2024-07-24 22:34:07.009438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.423 [2024-07-24 22:34:07.009488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.423 [2024-07-24 22:34:07.009510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.423 [2024-07-24 22:34:07.009781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.423 [2024-07-24 22:34:07.010049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.423 [2024-07-24 22:34:07.010072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.423 [2024-07-24 22:34:07.010087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.423 [2024-07-24 22:34:07.014127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.423 [2024-07-24 22:34:07.023432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.023914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.023956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.423 [2024-07-24 22:34:07.023976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.423 [2024-07-24 22:34:07.024247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.423 [2024-07-24 22:34:07.024531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.423 [2024-07-24 22:34:07.024554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.423 [2024-07-24 22:34:07.024569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.423 [2024-07-24 22:34:07.028604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.423 [2024-07-24 22:34:07.037912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.038387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.038435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.423 [2024-07-24 22:34:07.038455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.423 [2024-07-24 22:34:07.038737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.423 [2024-07-24 22:34:07.039005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.423 [2024-07-24 22:34:07.039027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.423 [2024-07-24 22:34:07.039042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.423 [2024-07-24 22:34:07.043103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.423 [2024-07-24 22:34:07.052418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.052903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.052944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.423 [2024-07-24 22:34:07.052964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.423 [2024-07-24 22:34:07.053234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.423 [2024-07-24 22:34:07.053514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.423 [2024-07-24 22:34:07.053538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.423 [2024-07-24 22:34:07.053553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.423 [2024-07-24 22:34:07.057587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.423 [2024-07-24 22:34:07.066863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.067290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.067321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.423 [2024-07-24 22:34:07.067338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.423 [2024-07-24 22:34:07.067614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.423 [2024-07-24 22:34:07.067881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.423 [2024-07-24 22:34:07.067903] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.423 [2024-07-24 22:34:07.067919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.423 [2024-07-24 22:34:07.071990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.423 [2024-07-24 22:34:07.081296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.081709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.081738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.423 [2024-07-24 22:34:07.081756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.423 [2024-07-24 22:34:07.082020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.423 [2024-07-24 22:34:07.082293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.423 [2024-07-24 22:34:07.082315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.423 [2024-07-24 22:34:07.082331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.423 [2024-07-24 22:34:07.086381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.423 [2024-07-24 22:34:07.095688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.423 [2024-07-24 22:34:07.096101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.423 [2024-07-24 22:34:07.096130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.424 [2024-07-24 22:34:07.096148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.424 [2024-07-24 22:34:07.096418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.424 [2024-07-24 22:34:07.096694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.424 [2024-07-24 22:34:07.096717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.424 [2024-07-24 22:34:07.096732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.424 [2024-07-24 22:34:07.100780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.424 [2024-07-24 22:34:07.110069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.424 [2024-07-24 22:34:07.110471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.424 [2024-07-24 22:34:07.110507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.424 [2024-07-24 22:34:07.110525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.424 [2024-07-24 22:34:07.110792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.424 [2024-07-24 22:34:07.111059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.424 [2024-07-24 22:34:07.111081] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.424 [2024-07-24 22:34:07.111096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.424 [2024-07-24 22:34:07.115150] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.424 [2024-07-24 22:34:07.124575] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.424 [2024-07-24 22:34:07.125014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.424 [2024-07-24 22:34:07.125043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.424 [2024-07-24 22:34:07.125061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.685 [2024-07-24 22:34:07.125325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.685 [2024-07-24 22:34:07.125603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.685 [2024-07-24 22:34:07.125628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.125643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.129736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.139061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.139498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.139528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.139545] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.139809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.140086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.140110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.140126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.144172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.153455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.153866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.153896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.153914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.154180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.154447] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.154469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.154493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.158528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.167819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.168261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.168291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.168308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.168580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.168848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.168872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.168887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.172929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.182228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.182657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.182686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.182713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.182977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.183244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.183266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.183282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.187377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.196691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.197101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.197131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.197148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.197412] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.197700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.197723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.197738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.201776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.211071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.211500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.211530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.211547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.211812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.212078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.212101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.212116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.216154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.225474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.225917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.225946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.225963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.226228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.226503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.226537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.226553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.230587] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.239924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.240313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.240343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.686 [2024-07-24 22:34:07.240361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.686 [2024-07-24 22:34:07.240635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.686 [2024-07-24 22:34:07.240910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.686 [2024-07-24 22:34:07.240932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.686 [2024-07-24 22:34:07.240947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.686 [2024-07-24 22:34:07.245060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.686 [2024-07-24 22:34:07.254398] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.686 [2024-07-24 22:34:07.254824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.686 [2024-07-24 22:34:07.254854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.254872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.255135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.255410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.255433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.255448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.259514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.268842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.269279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.269308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.269331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.269608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.269876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.269898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.269913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.273962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.283287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.283729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.283758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.283775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.284039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.284306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.284327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.284343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.288410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.297732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.298171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.298200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.298217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.298489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.298757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.298779] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.298794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.302847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.312138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.312533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.312563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.312580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.312848] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.313114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.313136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.313152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.317209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.326609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.327067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.327109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.327129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.327407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.327687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.327711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.327729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.331767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.341225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.341844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.341885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.341906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.342177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.342445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.342466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.342494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.346614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.355808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.356425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.356466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.356497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.356770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.357038] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.357060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.357076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.361137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.370269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.370746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.370788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.370807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.371077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.687 [2024-07-24 22:34:07.371347] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.687 [2024-07-24 22:34:07.371369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.687 [2024-07-24 22:34:07.371407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.687 [2024-07-24 22:34:07.375452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.687 [2024-07-24 22:34:07.384829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.687 [2024-07-24 22:34:07.385426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.687 [2024-07-24 22:34:07.385473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.687 [2024-07-24 22:34:07.385508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.687 [2024-07-24 22:34:07.385797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.688 [2024-07-24 22:34:07.386066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.688 [2024-07-24 22:34:07.386088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.688 [2024-07-24 22:34:07.386105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.949 [2024-07-24 22:34:07.390237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.949 [2024-07-24 22:34:07.399421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.949 [2024-07-24 22:34:07.399998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.949 [2024-07-24 22:34:07.400056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.949 [2024-07-24 22:34:07.400076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.949 [2024-07-24 22:34:07.400346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.949 [2024-07-24 22:34:07.400628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.949 [2024-07-24 22:34:07.400651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.949 [2024-07-24 22:34:07.400666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.949 [2024-07-24 22:34:07.404759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.949 [2024-07-24 22:34:07.413896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.949 [2024-07-24 22:34:07.414453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.949 [2024-07-24 22:34:07.414503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.949 [2024-07-24 22:34:07.414524] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.949 [2024-07-24 22:34:07.414795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.949 [2024-07-24 22:34:07.415063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.949 [2024-07-24 22:34:07.415085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.949 [2024-07-24 22:34:07.415101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.949 [2024-07-24 22:34:07.419177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.949 [2024-07-24 22:34:07.428299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.949 [2024-07-24 22:34:07.428798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.949 [2024-07-24 22:34:07.428846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:41.949 [2024-07-24 22:34:07.428866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:41.949 [2024-07-24 22:34:07.429137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:41.949 [2024-07-24 22:34:07.429405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.949 [2024-07-24 22:34:07.429427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.949 [2024-07-24 22:34:07.429442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.950 [2024-07-24 22:34:07.433521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.950 [2024-07-24 22:34:07.442866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.443462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.443513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.443533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.443803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.444072] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.444094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.444109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.448155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.457253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.457862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.457903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.457923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.458193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.458461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.458504] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.458522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.462583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.471746] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.472312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.472352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.472372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.472660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.472936] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.472958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.472974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.477025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.486137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.486589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.486619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.486636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.486907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.487174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.487196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.487211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.491303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.500717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.501298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.501340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.501359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.501641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.501910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.501933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.501948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.506256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.515119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.515570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.515610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.515629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.515900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.516168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.516190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.516206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.520286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.529684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.530259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.530300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.530319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.530603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.530872] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.530894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.530917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.534978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.544105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.544733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.544774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.544793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.545064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.545332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.545354] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.545369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.549425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.558554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.559136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.559190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.559210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.559492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.559762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.950 [2024-07-24 22:34:07.559784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.950 [2024-07-24 22:34:07.559799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.950 [2024-07-24 22:34:07.563866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.950 [2024-07-24 22:34:07.572941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.950 [2024-07-24 22:34:07.573473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.950 [2024-07-24 22:34:07.573523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.950 [2024-07-24 22:34:07.573549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.950 [2024-07-24 22:34:07.573820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.950 [2024-07-24 22:34:07.574088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.574111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.574126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.578180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.951 [2024-07-24 22:34:07.587307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.951 [2024-07-24 22:34:07.587901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.951 [2024-07-24 22:34:07.587943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.951 [2024-07-24 22:34:07.587963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.951 [2024-07-24 22:34:07.588239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.951 [2024-07-24 22:34:07.588532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.588555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.588571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.592664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.951 [2024-07-24 22:34:07.601791] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.951 [2024-07-24 22:34:07.602388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.951 [2024-07-24 22:34:07.602428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.951 [2024-07-24 22:34:07.602447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.951 [2024-07-24 22:34:07.602729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.951 [2024-07-24 22:34:07.602998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.603020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.603036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.607120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.951 [2024-07-24 22:34:07.616242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.951 [2024-07-24 22:34:07.616810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.951 [2024-07-24 22:34:07.616863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.951 [2024-07-24 22:34:07.616883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.951 [2024-07-24 22:34:07.617153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.951 [2024-07-24 22:34:07.617421] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.617450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.617466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.621576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.951 [2024-07-24 22:34:07.630656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.951 [2024-07-24 22:34:07.631268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.951 [2024-07-24 22:34:07.631309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.951 [2024-07-24 22:34:07.631328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.951 [2024-07-24 22:34:07.631617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.951 [2024-07-24 22:34:07.631886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.631909] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.631924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.636010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.951 [2024-07-24 22:34:07.645109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.951 [2024-07-24 22:34:07.645666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.951 [2024-07-24 22:34:07.645717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:41.951 [2024-07-24 22:34:07.645735] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:41.951 [2024-07-24 22:34:07.646001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:41.951 [2024-07-24 22:34:07.646273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.951 [2024-07-24 22:34:07.646295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.951 [2024-07-24 22:34:07.646311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.951 [2024-07-24 22:34:07.650433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.659634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.660063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.660094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.660112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.660377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.660661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.660684] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.660700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.664771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.674136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.674669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.674723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.674740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.675004] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.675271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.675292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.675308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.679393] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.688536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.689080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.689124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.689143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.689414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.689709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.689734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.689750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.693806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.702885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.703430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.703490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.703510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.703774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.704048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.704069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.704085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.708170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.717313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.717913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.717954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.717974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.718251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.718532] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.718555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.718571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.722657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.731800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.732355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.732396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.732415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.732699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.732968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.732989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.733004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.737072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.746185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.746732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.746774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.746794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.747065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.747337] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.747364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.747381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.751502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.760572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.761153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.761194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.761214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.761494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.761763] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.761786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.761807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.765848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.774896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.775385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.775416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.775433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.775714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.775982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.776004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.213 [2024-07-24 22:34:07.776020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.213 [2024-07-24 22:34:07.780052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.213 [2024-07-24 22:34:07.789346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.213 [2024-07-24 22:34:07.789831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.213 [2024-07-24 22:34:07.789885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.213 [2024-07-24 22:34:07.789902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.213 [2024-07-24 22:34:07.790166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.213 [2024-07-24 22:34:07.790433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.213 [2024-07-24 22:34:07.790454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.790470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.794540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.803895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.804388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.804417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.804434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.804707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.804975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.804997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.805013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.809083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.818405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.818934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.819016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.819034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.819298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.819577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.819601] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.819617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.823697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.832864] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.833352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.833393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.833412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.833695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.833965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.833987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.834003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.838079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.847313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.847863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.847914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.847932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.848196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.848463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.848505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.848522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.852575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.861875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.862425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.862466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.862497] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.862770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.863044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.863067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.863082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.867118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.876410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.876954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.876996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.877016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.877287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.877567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.877590] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.877606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.881639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.890955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.891541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.891610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.891630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.891900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.892168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.892190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.892206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.896255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.214 [2024-07-24 22:34:07.905362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.214 [2024-07-24 22:34:07.905973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.214 [2024-07-24 22:34:07.906015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.214 [2024-07-24 22:34:07.906034] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.214 [2024-07-24 22:34:07.906305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.214 [2024-07-24 22:34:07.906587] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.214 [2024-07-24 22:34:07.906610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.214 [2024-07-24 22:34:07.906626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.214 [2024-07-24 22:34:07.910757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.919949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.920442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.920474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.920504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.920779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.921066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.921089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.921104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.925211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.934458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.935031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.935073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.935092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.935363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.935641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.935664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.935680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.939759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.948921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.949391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.949422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.949439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.949714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.949981] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.950003] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.950019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.954110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.963558] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.964067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.964096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.964124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.964389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.964668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.964691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.964706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.968813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.977957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.978413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.978443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.978460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.978735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.979002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.979024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.979039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.983118] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:07.992509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:07.992984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:07.993034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:07.993051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:07.993321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:07.993599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:07.993622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:07.993637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:07.997723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:08.006928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:08.007439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:08.007495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:08.007514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:08.007785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:08.008051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:08.008085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:08.008101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:08.012183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:08.021372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:08.021942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:08.021984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:08.022003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:08.022280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:08.022576] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.476 [2024-07-24 22:34:08.022599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.476 [2024-07-24 22:34:08.022615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.476 [2024-07-24 22:34:08.026701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.476 [2024-07-24 22:34:08.035908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.476 [2024-07-24 22:34:08.036505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.476 [2024-07-24 22:34:08.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.476 [2024-07-24 22:34:08.036566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.476 [2024-07-24 22:34:08.036837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.476 [2024-07-24 22:34:08.037107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.477 [2024-07-24 22:34:08.037130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.477 [2024-07-24 22:34:08.037145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.477 [2024-07-24 22:34:08.041241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.477 [2024-07-24 22:34:08.050368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.477 [2024-07-24 22:34:08.050885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.477 [2024-07-24 22:34:08.050933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.477 [2024-07-24 22:34:08.050951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.477 [2024-07-24 22:34:08.051216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.477 [2024-07-24 22:34:08.051495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.477 [2024-07-24 22:34:08.051517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.477 [2024-07-24 22:34:08.051532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.477 [2024-07-24 22:34:08.055600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.477 [2024-07-24 22:34:08.064968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.477 [2024-07-24 22:34:08.065449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.477 [2024-07-24 22:34:08.065503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.477 [2024-07-24 22:34:08.065521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.477 [2024-07-24 22:34:08.065787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.477 [2024-07-24 22:34:08.066054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.477 [2024-07-24 22:34:08.066076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.477 [2024-07-24 22:34:08.066091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.477 [2024-07-24 22:34:08.070160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.477 [2024-07-24 22:34:08.079461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.477 [2024-07-24 22:34:08.079979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.477 [2024-07-24 22:34:08.080033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.477 [2024-07-24 22:34:08.080050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.477 [2024-07-24 22:34:08.080313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.477 [2024-07-24 22:34:08.080592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.477 [2024-07-24 22:34:08.080615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.477 [2024-07-24 22:34:08.080631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.477 [2024-07-24 22:34:08.084669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.477 [2024-07-24 22:34:08.094057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.477 [2024-07-24 22:34:08.094679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.477 [2024-07-24 22:34:08.094720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.477 [2024-07-24 22:34:08.094740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.477 [2024-07-24 22:34:08.095010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.477 [2024-07-24 22:34:08.095278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.477 [2024-07-24 22:34:08.095300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.477 [2024-07-24 22:34:08.095315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.477 [2024-07-24 22:34:08.099387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.477 [2024-07-24 22:34:08.108528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.477 [2024-07-24 22:34:08.109120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.477 [2024-07-24 22:34:08.109161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.477 [2024-07-24 22:34:08.109180] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.477 [2024-07-24 22:34:08.109457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.477 [2024-07-24 22:34:08.109736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.477 [2024-07-24 22:34:08.109759] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.477 [2024-07-24 22:34:08.109774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.477 [2024-07-24 22:34:08.113830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.477 [2024-07-24 22:34:08.122965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.477 [2024-07-24 22:34:08.123569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.477 [2024-07-24 22:34:08.123611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.477 [2024-07-24 22:34:08.123630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.477 [2024-07-24 22:34:08.123901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.477 [2024-07-24 22:34:08.124169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.477 [2024-07-24 22:34:08.124191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.477 [2024-07-24 22:34:08.124206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.477 [2024-07-24 22:34:08.128292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.477 [2024-07-24 22:34:08.137407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.477 [2024-07-24 22:34:08.137920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.477 [2024-07-24 22:34:08.137962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.477 [2024-07-24 22:34:08.137981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.477 [2024-07-24 22:34:08.138251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.477 [2024-07-24 22:34:08.138531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.477 [2024-07-24 22:34:08.138555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.477 [2024-07-24 22:34:08.138570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.477 [2024-07-24 22:34:08.142619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.477 [2024-07-24 22:34:08.151916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.477 [2024-07-24 22:34:08.152490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.477 [2024-07-24 22:34:08.152531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.477 [2024-07-24 22:34:08.152551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.477 [2024-07-24 22:34:08.152822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.477 [2024-07-24 22:34:08.153090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.477 [2024-07-24 22:34:08.153112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.477 [2024-07-24 22:34:08.153133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.477 [2024-07-24 22:34:08.157203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.477 [2024-07-24 22:34:08.166324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.477 [2024-07-24 22:34:08.166832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.477 [2024-07-24 22:34:08.166873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.477 [2024-07-24 22:34:08.166892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.477 [2024-07-24 22:34:08.167169] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.477 [2024-07-24 22:34:08.167437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.477 [2024-07-24 22:34:08.167459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.477 [2024-07-24 22:34:08.167474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.477 [2024-07-24 22:34:08.171551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.180794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.181359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.181414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.181434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.181717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.181986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.182008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.182023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.737 [2024-07-24 22:34:08.186123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.195277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.195790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.195822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.195839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.196110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.196377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.196400] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.196415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.737 [2024-07-24 22:34:08.200459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.209771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.210288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.210339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.210389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.210664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.210931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.210953] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.210970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.737 [2024-07-24 22:34:08.215095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.224312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.224834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.224874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.224894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.225165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.225433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.225455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.225470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.737 [2024-07-24 22:34:08.229561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.238680] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.239303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.239344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.239364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.239647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.239916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.239938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.239953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.737 [2024-07-24 22:34:08.244024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.737 [2024-07-24 22:34:08.253131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.737 [2024-07-24 22:34:08.253566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.737 [2024-07-24 22:34:08.253626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.737 [2024-07-24 22:34:08.253646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.737 [2024-07-24 22:34:08.253917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.737 [2024-07-24 22:34:08.254198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.737 [2024-07-24 22:34:08.254221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.737 [2024-07-24 22:34:08.254237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.258357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.267461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.268003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.268044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.268063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.268334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.268614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.268637] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.268653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.272701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.282013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.282522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.282556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.282574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.282838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.283105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.283127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.283142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.287222] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.296348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.296941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.296983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.297002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.297273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.297554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.297578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.297593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.301656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.310710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.311287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.311329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.311348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.311630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.311900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.311922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.311937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.315975] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.325050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.325651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.325694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.325713] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.325984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.326252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.326274] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.326289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.330340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.339441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.339988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.340032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.340059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.340334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.340621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.340645] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.340662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.344722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.354031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.354505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.354568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.354604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.354876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.355146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.355168] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.355185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.359223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.368528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.369075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.369126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.369144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.369409] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.369687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.369711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.369727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.373770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.383072] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.383646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.383681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.383712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.383977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.384251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.384273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.384289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.738 [2024-07-24 22:34:08.388345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.738 [2024-07-24 22:34:08.397462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.738 [2024-07-24 22:34:08.398034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.738 [2024-07-24 22:34:08.398085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.738 [2024-07-24 22:34:08.398102] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.738 [2024-07-24 22:34:08.398366] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.738 [2024-07-24 22:34:08.398645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.738 [2024-07-24 22:34:08.398673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.738 [2024-07-24 22:34:08.398689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.739 [2024-07-24 22:34:08.402732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.739 [2024-07-24 22:34:08.412054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.739 [2024-07-24 22:34:08.412567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.739 [2024-07-24 22:34:08.412597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.739 [2024-07-24 22:34:08.412614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.739 [2024-07-24 22:34:08.412879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.739 [2024-07-24 22:34:08.413145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.739 [2024-07-24 22:34:08.413167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.739 [2024-07-24 22:34:08.413182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.739 [2024-07-24 22:34:08.417235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.739 [2024-07-24 22:34:08.426547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.739 [2024-07-24 22:34:08.427117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.739 [2024-07-24 22:34:08.427158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.739 [2024-07-24 22:34:08.427177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.739 [2024-07-24 22:34:08.427448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.739 [2024-07-24 22:34:08.427729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.739 [2024-07-24 22:34:08.427752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.739 [2024-07-24 22:34:08.427768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.739 [2024-07-24 22:34:08.431827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.440983] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.441512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.441545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.441563] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.441837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.442115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.442139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.442155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.446255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.455361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.455914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.455955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.455975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.456246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.456526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.456549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.456565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.460605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.469918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.470504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.470544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.470570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.470841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.471109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.471131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.471147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.475188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.484248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.484770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.484801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.484819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.485083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.485349] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.485371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.485386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.489423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.498742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.499320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.499361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.499380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.499670] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.499940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.499962] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.499978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.504017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.513174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:42.998 [2024-07-24 22:34:08.513717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.998 [2024-07-24 22:34:08.513758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420
00:24:42.998 [2024-07-24 22:34:08.513777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set
00:24:42.998 [2024-07-24 22:34:08.514048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor
00:24:42.998 [2024-07-24 22:34:08.514315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:42.998 [2024-07-24 22:34:08.514338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:42.998 [2024-07-24 22:34:08.514353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:42.998 [2024-07-24 22:34:08.518398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:42.998 [2024-07-24 22:34:08.527701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.528252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.528292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.528312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.528783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.529052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.529074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.529091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.533131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.542212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.542738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.542831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.542851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.543122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.543392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.543414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.543437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.547502] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.556577] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.557070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.557110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.557129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.557400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.557686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.557709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.557724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.561791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.571116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.571675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.571730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.571750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.572020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.572290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.572313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.572328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.576374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.585671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.586161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.586210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.586227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.586501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.586769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.586791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.586806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.590843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.600155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.600677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.600728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.600746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.601016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.601282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.601304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.601320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.605365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.614712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.615203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.615244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.615264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.615548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.615817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.615839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.615854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.619890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.629182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.629760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.629802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.629821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.630092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.630361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.630383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.630398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.634439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.643532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.644096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.644136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.644155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.644431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.644717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.644741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.644756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.648816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.657872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.658400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.999 [2024-07-24 22:34:08.658448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:42.999 [2024-07-24 22:34:08.658465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:42.999 [2024-07-24 22:34:08.658865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:42.999 [2024-07-24 22:34:08.659135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.999 [2024-07-24 22:34:08.659157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.999 [2024-07-24 22:34:08.659172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.999 [2024-07-24 22:34:08.663205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.999 [2024-07-24 22:34:08.672260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.999 [2024-07-24 22:34:08.672788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.000 [2024-07-24 22:34:08.672830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.000 [2024-07-24 22:34:08.672849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.000 [2024-07-24 22:34:08.673120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.000 [2024-07-24 22:34:08.673388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.000 [2024-07-24 22:34:08.673410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.000 [2024-07-24 22:34:08.673425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.000 [2024-07-24 22:34:08.677474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.000 [2024-07-24 22:34:08.686795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.000 [2024-07-24 22:34:08.687359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.000 [2024-07-24 22:34:08.687414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.000 [2024-07-24 22:34:08.687433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.000 [2024-07-24 22:34:08.687716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.000 [2024-07-24 22:34:08.687985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.000 [2024-07-24 22:34:08.688007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.000 [2024-07-24 22:34:08.688023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.000 [2024-07-24 22:34:08.692075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.000 [2024-07-24 22:34:08.701271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.701815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.701858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.701877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.702148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.702424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.702448] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.259 [2024-07-24 22:34:08.702464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.259 [2024-07-24 22:34:08.706583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.259 [2024-07-24 22:34:08.715656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.716150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.716180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.716204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.716468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.716746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.716768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.259 [2024-07-24 22:34:08.716784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.259 [2024-07-24 22:34:08.720852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.259 [2024-07-24 22:34:08.730176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.730675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.730724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.730742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.731007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.731273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.731296] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.259 [2024-07-24 22:34:08.731311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.259 [2024-07-24 22:34:08.735355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.259 [2024-07-24 22:34:08.744684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.745185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.745236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.745259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.745534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.745801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.745824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.259 [2024-07-24 22:34:08.745839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.259 [2024-07-24 22:34:08.749877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.259 [2024-07-24 22:34:08.759191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.759666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.759708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.759728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.759998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.760270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.760293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.259 [2024-07-24 22:34:08.760309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.259 [2024-07-24 22:34:08.764428] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.259 [2024-07-24 22:34:08.773751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.259 [2024-07-24 22:34:08.774395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.259 [2024-07-24 22:34:08.774436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.259 [2024-07-24 22:34:08.774455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.259 [2024-07-24 22:34:08.774741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.259 [2024-07-24 22:34:08.775011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.259 [2024-07-24 22:34:08.775033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.775049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.779083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.788135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.788722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.788763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.788783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.789053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.789321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.789351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.789367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.793429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.802532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.803060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.803101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.803121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.803391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.803677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.803701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.803716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.807759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.817057] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.817500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.817542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.817562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.817832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.818101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.818123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.818138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.822174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.831500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.832030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.832086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.832105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.832375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.832654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.832677] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.832693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.836735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.846043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.846547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.846578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.846596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.846860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.847127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.847149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.847165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.851200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.860550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.861146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.861187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.861207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.861478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.861761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.861783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.861798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.865841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.874898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.875436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.875492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.875511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.875776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.876042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.876064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.876079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.880116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.889449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.890042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.890084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.890104] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.890380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.890666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.890690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.890705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.894769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.903840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.904429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.904470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.904501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.904773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.905041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.260 [2024-07-24 22:34:08.905063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.260 [2024-07-24 22:34:08.905079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.260 [2024-07-24 22:34:08.909123] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.260 [2024-07-24 22:34:08.918221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.260 [2024-07-24 22:34:08.918739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.260 [2024-07-24 22:34:08.918780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.260 [2024-07-24 22:34:08.918800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.260 [2024-07-24 22:34:08.919070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.260 [2024-07-24 22:34:08.919338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.261 [2024-07-24 22:34:08.919360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.261 [2024-07-24 22:34:08.919376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.261 [2024-07-24 22:34:08.923416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.261 [2024-07-24 22:34:08.932726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.261 [2024-07-24 22:34:08.933307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.261 [2024-07-24 22:34:08.933347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.261 [2024-07-24 22:34:08.933366] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.261 [2024-07-24 22:34:08.933649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.261 [2024-07-24 22:34:08.933918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.261 [2024-07-24 22:34:08.933940] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.261 [2024-07-24 22:34:08.933962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.261 [2024-07-24 22:34:08.938003] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.261 [2024-07-24 22:34:08.947070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.261 [2024-07-24 22:34:08.947656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.261 [2024-07-24 22:34:08.947722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.261 [2024-07-24 22:34:08.947742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.261 [2024-07-24 22:34:08.948013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.261 [2024-07-24 22:34:08.948281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.261 [2024-07-24 22:34:08.948303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.261 [2024-07-24 22:34:08.948318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.261 [2024-07-24 22:34:08.952357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.261 [2024-07-24 22:34:08.961474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.261 [2024-07-24 22:34:08.961996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.261 [2024-07-24 22:34:08.962042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.261 [2024-07-24 22:34:08.962060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.261 [2024-07-24 22:34:08.962342] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.587 [2024-07-24 22:34:08.962626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.587 [2024-07-24 22:34:08.962651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.587 [2024-07-24 22:34:08.962668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.587 [2024-07-24 22:34:08.966734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.587 [2024-07-24 22:34:08.975807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.587 [2024-07-24 22:34:08.976356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.587 [2024-07-24 22:34:08.976406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.587 [2024-07-24 22:34:08.976424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:08.976695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:08.976963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:08.976985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:08.977001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:08.981037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:08.990351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:08.990859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:08.990902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:08.990922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:08.991193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:08.991467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:08.991499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:08.991516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:08.995591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.004878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.005364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.005411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.005429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.005704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.005972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.005994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.006010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.010042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.019461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.020075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.020116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.020136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.020407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.020686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.020709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.020725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.024761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.033818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.034293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.034334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.034354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.034635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.034910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.034933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.034949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.038990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.048286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.048756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.048797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.048817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.049088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.049356] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.049378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.049394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.053436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.062732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.063371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.063412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.063432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.063713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.063983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.064005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.064021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.068056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.077155] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.077811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.077865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.077884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.078161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.078429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.078451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.078467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.082563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.091671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.092251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.092292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.092311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.092596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.092875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.092899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.092916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.097052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.106190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.106768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.106809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.106829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.107105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.107373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.588 [2024-07-24 22:34:09.107395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.588 [2024-07-24 22:34:09.107410] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.588 [2024-07-24 22:34:09.111507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.588 [2024-07-24 22:34:09.120653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.588 [2024-07-24 22:34:09.121172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.588 [2024-07-24 22:34:09.121214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.588 [2024-07-24 22:34:09.121233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.588 [2024-07-24 22:34:09.121524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.588 [2024-07-24 22:34:09.121793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.121815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.121830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.125927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.135054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.135652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.135694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.135719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.135992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.136260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.136282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.136298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.140347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.149550] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.150045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.150076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.150094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.150358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.150639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.150663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.150678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.154779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.164134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.164656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.164697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.164716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.164986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.165254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.165276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.165292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.169360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.178695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.179236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.179277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.179297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.179580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.179850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.179893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.179909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.184007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.193149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.193663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.193694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.193712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.193977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.194255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.194278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.194293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.198353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.207726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.208274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.208322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.208340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.208617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.208885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.208907] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.208922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.212978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.222123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.222690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.222730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.222749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.223020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.223288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.223310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.223326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.227386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.236500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.237057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.237098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.237118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.237389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.237669] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.237692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.237707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.241761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.250899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.251491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.251532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.251552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.589 [2024-07-24 22:34:09.251823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.589 [2024-07-24 22:34:09.252092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.589 [2024-07-24 22:34:09.252114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.589 [2024-07-24 22:34:09.252129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.589 [2024-07-24 22:34:09.256210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.589 [2024-07-24 22:34:09.265345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.589 [2024-07-24 22:34:09.265944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.589 [2024-07-24 22:34:09.265986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.589 [2024-07-24 22:34:09.266005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.590 [2024-07-24 22:34:09.266276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.590 [2024-07-24 22:34:09.266563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.590 [2024-07-24 22:34:09.266587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.590 [2024-07-24 22:34:09.266603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.590 [2024-07-24 22:34:09.270705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.590 [2024-07-24 22:34:09.279872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.590 [2024-07-24 22:34:09.280373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.590 [2024-07-24 22:34:09.280404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.590 [2024-07-24 22:34:09.280422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.590 [2024-07-24 22:34:09.280708] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.590 [2024-07-24 22:34:09.280976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.590 [2024-07-24 22:34:09.280998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.590 [2024-07-24 22:34:09.281014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.590 [2024-07-24 22:34:09.285112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.850 [2024-07-24 22:34:09.294319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.850 [2024-07-24 22:34:09.294812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.850 [2024-07-24 22:34:09.294843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.850 [2024-07-24 22:34:09.294861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.850 [2024-07-24 22:34:09.295126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.850 [2024-07-24 22:34:09.295393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.850 [2024-07-24 22:34:09.295423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.850 [2024-07-24 22:34:09.295448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.850 [2024-07-24 22:34:09.299541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.850 [2024-07-24 22:34:09.308875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.850 [2024-07-24 22:34:09.309467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.850 [2024-07-24 22:34:09.309517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.850 [2024-07-24 22:34:09.309537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.850 [2024-07-24 22:34:09.309808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.850 [2024-07-24 22:34:09.310076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.310098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.310114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.314159] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.323296] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.323892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.323951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.323971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.324242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.324523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.324546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.324570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.328650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.337806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.338306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.338355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.338373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.338648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.338917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.338939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.338954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.343006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.352356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.352896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.352937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.352957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.353228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.353509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.353532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.353548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.357592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.366896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.367430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.367489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.367509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.367774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.368041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.368063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.368078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.372138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.381256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.381909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.381951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.381971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.382242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.382525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.382548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.382564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.386614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.395706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.396312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.396354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.396373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.396655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.396924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.396947] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.396962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.401008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.410074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.410640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.410682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.410702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.410973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.411241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.411264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.411280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.415341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.424412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.424940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.424981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.425001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.425272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.425562] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.425585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.425600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.851 [2024-07-24 22:34:09.429643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.851 [2024-07-24 22:34:09.438933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.851 [2024-07-24 22:34:09.439488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.851 [2024-07-24 22:34:09.439530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.851 [2024-07-24 22:34:09.439550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.851 [2024-07-24 22:34:09.439821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.851 [2024-07-24 22:34:09.440089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.851 [2024-07-24 22:34:09.440111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.851 [2024-07-24 22:34:09.440126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.444172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.453462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.453990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.454020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.454038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.454303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.454579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.454602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.454618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.458671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.467972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.468465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.468555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.468574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.468837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.469110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.469133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.469148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3925571 Killed "${NVMF_APP[@]}" "$@" 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:43.852 [2024-07-24 22:34:09.473183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3926376 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3926376 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3926376 ']' 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:43.852 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:43.852 [2024-07-24 22:34:09.482528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.483101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.483166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.483186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.483464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.483745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.483768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.483785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.487826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.496956] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.497387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.497440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.497458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.497730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.497999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.498021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.498042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.502080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.511375] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.511877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.511948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.511968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.512240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.512520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.512543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.512558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.516591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.525747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.526223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.526254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.526272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.526545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.526813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.526836] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.526851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.852 [2024-07-24 22:34:09.529753] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:43.852 [2024-07-24 22:34:09.529851] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.852 [2024-07-24 22:34:09.530881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:43.852 [2024-07-24 22:34:09.540174] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:43.852 [2024-07-24 22:34:09.540662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:43.852 [2024-07-24 22:34:09.540691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:43.852 [2024-07-24 22:34:09.540709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:43.852 [2024-07-24 22:34:09.540980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:43.852 [2024-07-24 22:34:09.541253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:43.852 [2024-07-24 22:34:09.541275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:43.852 [2024-07-24 22:34:09.541290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:43.853 [2024-07-24 22:34:09.545365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.113 [2024-07-24 22:34:09.554957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.555515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.555569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.555587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.555852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.556118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.556140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.556156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.560247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.114 [2024-07-24 22:34:09.569293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.569821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.569905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.569923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.570194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.570461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.570492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.570510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.574564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.583625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.584067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.584109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.584129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.584406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.584685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.584708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.584724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.588759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.598078] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.598511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.598548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.598566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.598830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.599097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.599119] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.599135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.599868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.114 [2024-07-24 22:34:09.603190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.612665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.613302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.613355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.613379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.613673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.613948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.613971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.613989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.618065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.627129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.627678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.627727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.627747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.628023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.628294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.628317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.628334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.632380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.641687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.642209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.642247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.642266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.642556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.642842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.642865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.642883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.646943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.656233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.656768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.656818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.656841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.657120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.657393] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.657416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.657434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.661519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.670693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.671341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.671397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.671420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.671716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.671990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.672014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.672033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.676070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.685141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.685667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.685706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.685724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.685994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.686263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.686285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.686302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.690354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.699692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.700233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.700283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.700304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.700595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.700868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.700890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.700907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.704944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.714265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.714766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.714803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.714822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.715091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.715365] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.715389] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.715407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.716155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.114 [2024-07-24 22:34:09.716191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.114 [2024-07-24 22:34:09.716208] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:44.114 [2024-07-24 22:34:09.716221] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:44.114 [2024-07-24 22:34:09.716233] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:44.114 [2024-07-24 22:34:09.716311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.114 [2024-07-24 22:34:09.716364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.114 [2024-07-24 22:34:09.716368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:44.114 [2024-07-24 22:34:09.719452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.114 [2024-07-24 22:34:09.728875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.729504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.729548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.729569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.729849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.730138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.730161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.730181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.734272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.743522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.744132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.744178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.744199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.744493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.744776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.744799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.744820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.748929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.758178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.758832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.758880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.758902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.759182] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.759457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.759487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.759508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.763566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.772678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.773308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.773351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.773372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.773660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.773944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.773968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.773989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.778147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.787346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.787952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.787995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.788016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.788297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.788578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.788602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.788622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.792691] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.801831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.802262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.114 [2024-07-24 22:34:09.802291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.114 [2024-07-24 22:34:09.802308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.114 [2024-07-24 22:34:09.802583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.114 [2024-07-24 22:34:09.802851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.114 [2024-07-24 22:34:09.802874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.114 [2024-07-24 22:34:09.802890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.114 [2024-07-24 22:34:09.806934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.114 [2024-07-24 22:34:09.816330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.114 [2024-07-24 22:34:09.816783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.115 [2024-07-24 22:34:09.816817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.115 [2024-07-24 22:34:09.816835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.374 [2024-07-24 22:34:09.817102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.374 [2024-07-24 22:34:09.817373] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.374 [2024-07-24 22:34:09.817396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.374 [2024-07-24 22:34:09.817412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.374 [2024-07-24 22:34:09.821526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.374 [2024-07-24 22:34:09.830845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.374 [2024-07-24 22:34:09.831238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.374 [2024-07-24 22:34:09.831268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.374 [2024-07-24 22:34:09.831286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.374 [2024-07-24 22:34:09.831560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.374 [2024-07-24 22:34:09.831829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.374 [2024-07-24 22:34:09.831851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.374 [2024-07-24 22:34:09.831867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.374 [2024-07-24 22:34:09.835897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.374 [2024-07-24 22:34:09.845219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.374 [2024-07-24 22:34:09.845646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.374 [2024-07-24 22:34:09.845677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.374 [2024-07-24 22:34:09.845694] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.374 [2024-07-24 22:34:09.845964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.374 [2024-07-24 22:34:09.846238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.374 [2024-07-24 22:34:09.846262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.374 [2024-07-24 22:34:09.846279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.374 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.374 [2024-07-24 22:34:09.850372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.374 [2024-07-24 22:34:09.854094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:44.374 [2024-07-24 22:34:09.859696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.374 [2024-07-24 22:34:09.860115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.374 [2024-07-24 22:34:09.860143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.374 [2024-07-24 22:34:09.860161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.374 [2024-07-24 22:34:09.860431] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.374 [2024-07-24 22:34:09.860706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.374 [2024-07-24 22:34:09.860730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.374 [2024-07-24 22:34:09.860756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.374 [2024-07-24 22:34:09.864819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.374 [2024-07-24 22:34:09.874164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.374 [2024-07-24 22:34:09.874609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.375 [2024-07-24 22:34:09.874639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.375 [2024-07-24 22:34:09.874656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.375 [2024-07-24 22:34:09.874924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.375 [2024-07-24 22:34:09.875203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.375 [2024-07-24 22:34:09.875226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.375 [2024-07-24 22:34:09.875241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.375 [2024-07-24 22:34:09.879298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.375 [2024-07-24 22:34:09.888699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.375 [2024-07-24 22:34:09.889303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.375 [2024-07-24 22:34:09.889348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.375 [2024-07-24 22:34:09.889369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.375 [2024-07-24 22:34:09.889671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.375 [2024-07-24 22:34:09.889946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.375 [2024-07-24 22:34:09.889969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.375 [2024-07-24 22:34:09.889988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.375 [2024-07-24 22:34:09.894091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.375 Malloc0 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.375 [2024-07-24 22:34:09.903345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.375 [2024-07-24 22:34:09.903895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.375 [2024-07-24 22:34:09.903931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.375 [2024-07-24 22:34:09.903952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.375 [2024-07-24 22:34:09.904224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.375 [2024-07-24 22:34:09.904515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.375 [2024-07-24 22:34:09.904539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:44.375 [2024-07-24 22:34:09.904558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.375 [2024-07-24 22:34:09.908638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.375 [2024-07-24 22:34:09.917751] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.375 [2024-07-24 22:34:09.918161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:44.375 [2024-07-24 22:34:09.918190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a08d0 with addr=10.0.0.2, port=4420 00:24:44.375 [2024-07-24 22:34:09.918208] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a08d0 is same with the state(5) to be set 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:44.375 [2024-07-24 22:34:09.918491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a08d0 (9): Bad file descriptor 00:24:44.375 [2024-07-24 22:34:09.918760] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:44.375 [2024-07-24 22:34:09.918783] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller 
reinitialization failed 00:24:44.375 [2024-07-24 22:34:09.918799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:44.375 [2024-07-24 22:34:09.921926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:44.375 [2024-07-24 22:34:09.922834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:44.375 22:34:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3925798 00:24:44.375 [2024-07-24 22:34:09.932134] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:44.375 [2024-07-24 22:34:09.974346] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:54.356 00:24:54.356 Latency(us) 00:24:54.356 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:54.356 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:54.356 Verification LBA range: start 0x0 length 0x4000 00:24:54.356 Nvme1n1 : 15.01 5753.97 22.48 7472.78 0.00 9646.84 703.91 24175.50 00:24:54.356 =================================================================================================================== 00:24:54.356 Total : 5753.97 22.48 7472.78 0.00 9646.84 703.91 24175.50 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:54.356 rmmod nvme_tcp 00:24:54.356 rmmod nvme_fabrics 00:24:54.356 rmmod nvme_keyring 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3926376 ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3926376 ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926376' 00:24:54.356 killing process with pid 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3926376 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:54.356 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.357 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.357 22:34:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:56.258 00:24:56.258 real 0m22.075s 00:24:56.258 user 0m59.887s 00:24:56.258 sys 0m3.898s 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:56.258 ************************************ 00:24:56.258 END TEST nvmf_bdevperf 00:24:56.258 ************************************ 
00:24:56.258 22:34:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.258 ************************************ 00:24:56.258 START TEST nvmf_target_disconnect 00:24:56.258 ************************************ 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:56.258 * Looking for test storage... 00:24:56.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.258 22:34:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:56.258 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:56.259 22:34:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:56.259 22:34:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:57.638 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:24:57.638 Found 0000:08:00.0 (0x8086 - 0x159b) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:24:57.638 Found 0000:08:00.1 (0x8086 - 0x159b) 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:57.638 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.639 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:24:57.639 Found net devices under 0000:08:00.0: cvl_0_0 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:24:57.639 Found net devices under 0000:08:00.1: cvl_0_1 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.639 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.639 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.639 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:57.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:24:57.899 00:24:57.899 --- 10.0.0.2 ping statistics --- 00:24:57.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.899 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:57.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:24:57.899 00:24:57.899 --- 10.0.0.1 ping statistics --- 00:24:57.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.899 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:57.899 ************************************ 00:24:57.899 START TEST nvmf_target_disconnect_tc1 00:24:57.899 ************************************ 00:24:57.899 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:57.899 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.899 [2024-07-24 22:34:23.487325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:57.899 [2024-07-24 22:34:23.487413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21b40c0 with addr=10.0.0.2, port=4420 00:24:57.899 [2024-07-24 22:34:23.487461] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:57.899 [2024-07-24 22:34:23.487513] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:57.899 [2024-07-24 22:34:23.487539] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:57.899 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:57.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:57.899 Initializing NVMe Controllers 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:57.899 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:57.899 00:24:57.899 real 0m0.096s 00:24:57.899 user 0m0.041s 00:24:57.899 sys 0m0.054s 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:57.899 ************************************ 00:24:57.899 END TEST nvmf_target_disconnect_tc1 00:24:57.899 ************************************ 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:57.899 ************************************ 00:24:57.899 START TEST nvmf_target_disconnect_tc2 00:24:57.899 ************************************ 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3928731 00:24:57.899 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3928731 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3928731 ']' 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:57.900 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.157 [2024-07-24 22:34:23.612930] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:24:58.157 [2024-07-24 22:34:23.613023] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.157 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.157 [2024-07-24 22:34:23.679230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.157 [2024-07-24 22:34:23.796857] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.157 [2024-07-24 22:34:23.796917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.157 [2024-07-24 22:34:23.796933] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.157 [2024-07-24 22:34:23.796950] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.157 [2024-07-24 22:34:23.796962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.157 [2024-07-24 22:34:23.797027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:58.157 [2024-07-24 22:34:23.797097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:58.157 [2024-07-24 22:34:23.797195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:58.157 [2024-07-24 22:34:23.797197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 Malloc0 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 [2024-07-24 22:34:23.960237] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 [2024-07-24 22:34:23.988464] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3928842 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:58.414 22:34:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:58.414 EAL: No free 2048 kB 
hugepages reported on node 1 00:25:00.317 22:34:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3928731 00:25:00.317 22:34:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Read completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.317 Write completed with error (sct=0, sc=8) 00:25:00.317 starting I/O failed 00:25:00.318 Read completed with error (sct=0, 
sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 [2024-07-24 22:34:26.015614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 
00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 
00:25:00.318 starting I/O failed 00:25:00.318 [2024-07-24 22:34:26.016137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 
starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 [2024-07-24 22:34:26.016542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O 
failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Read completed with error (sct=0, sc=8) 00:25:00.318 starting I/O failed 00:25:00.318 Write completed with error (sct=0, sc=8) 00:25:00.319 starting I/O failed 00:25:00.319 Read completed with error (sct=0, sc=8) 00:25:00.319 starting I/O failed 00:25:00.319 [2024-07-24 22:34:26.016960] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:00.319 [2024-07-24 22:34:26.017146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.017180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.017426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.017478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.017648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.017677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.017891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.017942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.018111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.018272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.018299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.018555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.018583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.018828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.019025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.019052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.019194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.019249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.019424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.019468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.019646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.019673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.019892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.019918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.020107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.020133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.020235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.020262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.020450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.020478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.020676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.020723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.020876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.020910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.021112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.319 [2024-07-24 22:34:26.021160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.319 qpair failed and we were unable to recover it.
00:25:00.319 [2024-07-24 22:34:26.021355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.021382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.021501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.021537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.021738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.021765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.021896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.021923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.022113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.022321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.022449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.022686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.022850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.022994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.023044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.023206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.023232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.023394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.023420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.023636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.023663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.023827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.023854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.024071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.024199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.024422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.024569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.024789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.024982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.598 [2024-07-24 22:34:26.025031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.598 qpair failed and we were unable to recover it.
00:25:00.598 [2024-07-24 22:34:26.025219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.025255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.025448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.025545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.025780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.025842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.025979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.026161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.026345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.026494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.026624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.026871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.026899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.027864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.027891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.028079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.028280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.028504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.028697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.028827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.028969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.029145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.029371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.029526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.029746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.029886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.029912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.030938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.030982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.031111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.031137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.031283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.031333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.031536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.031562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.599 [2024-07-24 22:34:26.031726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.599 [2024-07-24 22:34:26.031778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.599 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.031943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.031969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.032169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.032219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.032343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.032397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.032493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.032519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.032658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.032713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.032851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.032902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.033082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.033134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.033326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.033376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.033539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.033565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.033758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.033813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.033931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.033956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.034896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.034955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.035057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.035084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.035289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.035315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.035446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.035472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.035598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.035639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.035855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.035908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.036095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.036153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.036363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.036389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.036585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.036618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.036750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.036830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.036933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.036958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.037104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.037158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.037357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.037421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.037550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.037601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.037733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.037785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.037932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.037976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.038183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.038231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.038383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.038440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.038594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.038619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.038740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.600 [2024-07-24 22:34:26.038771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.600 qpair failed and we were unable to recover it.
00:25:00.600 [2024-07-24 22:34:26.038959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.601 [2024-07-24 22:34:26.038985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.601 qpair failed and we were unable to recover it.
00:25:00.601 [2024-07-24 22:34:26.039151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.601 [2024-07-24 22:34:26.039216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.601 qpair failed and we were unable to recover it.
00:25:00.601 [2024-07-24 22:34:26.039407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.601 [2024-07-24 22:34:26.039435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.601 qpair failed and we were unable to recover it.
00:25:00.601 [2024-07-24 22:34:26.039538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.039565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.039753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.039779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.039943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.039970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.040213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.040261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.040408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.040453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.040638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.040704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.040970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.041164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.041348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.041540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.041723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.041864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.041890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.042006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.042231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.042436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.042616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.042773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.042916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.042961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.043094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.043218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.043346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.043487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.043752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.043931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.043958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.044093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.044118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.044219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.044249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.044445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.044505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.044754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.044783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.044926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.044977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.045109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.045164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 
00:25:00.601 [2024-07-24 22:34:26.045355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.045403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.045504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.045531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.045647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.045672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.601 [2024-07-24 22:34:26.045807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.601 [2024-07-24 22:34:26.045862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.601 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.046071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.046098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.046281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.046307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.046464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.046533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.046662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.046718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.046906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.046932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.047133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.047160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.047323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.047376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.047549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.047576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.047678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.047705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.047893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.047944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.048085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.048133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.048334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.048360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.048509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.048553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.048714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.048766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.048901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.048958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.049117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.049144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.049302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.049356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.049456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.049489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.049682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.049708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.049825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.049852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.049998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.050201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.050367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.050555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.050768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.050914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.050942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.051067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.051093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.051194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.051220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.051370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.051430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.051590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.051648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.051823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.051851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.052032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.052071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.052236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.052262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.052403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.052455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.052663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.052715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 00:25:00.602 [2024-07-24 22:34:26.052851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.602 [2024-07-24 22:34:26.052906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.602 qpair failed and we were unable to recover it. 
00:25:00.602 [2024-07-24 22:34:26.053035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.603 [2024-07-24 22:34:26.053090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.603 qpair failed and we were unable to recover it. 00:25:00.603 [2024-07-24 22:34:26.053237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.603 [2024-07-24 22:34:26.053287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.603 qpair failed and we were unable to recover it. 00:25:00.603 [2024-07-24 22:34:26.053463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.603 [2024-07-24 22:34:26.053501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.603 qpair failed and we were unable to recover it. 00:25:00.603 [2024-07-24 22:34:26.053661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.603 [2024-07-24 22:34:26.053716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.603 qpair failed and we were unable to recover it. 00:25:00.603 [2024-07-24 22:34:26.053878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.603 [2024-07-24 22:34:26.053905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.603 qpair failed and we were unable to recover it. 
00:25:00.603 [2024-07-24 22:34:26.054054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.054235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.054402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.054587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.054758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.054943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.054970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.055111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.055162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.055291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.055343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.055516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.055570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.055715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.055776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.055957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.055982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.056156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.056213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.056367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.056414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.056541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.056567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.056706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.056749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.056933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.056959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.057104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.057155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.057316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.057369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.057561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.057588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.057729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.057755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.057869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.057897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.058013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.058073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.058204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.058256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.058357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.058383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.058489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.603 [2024-07-24 22:34:26.058523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.603 qpair failed and we were unable to recover it.
00:25:00.603 [2024-07-24 22:34:26.058701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.058751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.058916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.058973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.059173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.059226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.059428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.059477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.059620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.059676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.059877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.059930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.060927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.060979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.061108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.061160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.061336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.061362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.061536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.061563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.061769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.061818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.061928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.061955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.062138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.062191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.062391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.062443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.062644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.062708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.062877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.062928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.063105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.063132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.063322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.063348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.063449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.063476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.063719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.063765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.063931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.063957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.064146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.064173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.064300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.064345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.064444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.064470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.064698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.064749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.064903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.064957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.065135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.065161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.065267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.065294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.065419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.065472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.065682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.065732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.065878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.065926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.066050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.066106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.604 qpair failed and we were unable to recover it.
00:25:00.604 [2024-07-24 22:34:26.066206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.604 [2024-07-24 22:34:26.066233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.066469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.066502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.066683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.066729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.066934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.066988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.067139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.067195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.067324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.067350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.067496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.067553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.067690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.067745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.067950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.068001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.068149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.068201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.068389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.068443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.068640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.068692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.068873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.068899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.068998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.069023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.069206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.069232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.069459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.069521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.069658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.069684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.069843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.069870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.070049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.070075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.070242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.070300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.070434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.070494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.070648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.070702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.070888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.070940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.071063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.071118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.071323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.071349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.071528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.071556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.071699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.071726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.071917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.071972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.072136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.072194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.072373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.072399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.605 qpair failed and we were unable to recover it.
00:25:00.605 [2024-07-24 22:34:26.072496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.605 [2024-07-24 22:34:26.072523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.072654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.072699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.072849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.072901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.073016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.073075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.073204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.073230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.073363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.073390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.073555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.073633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.073789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.073832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.074054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.074229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.074418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.074562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.074840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.074975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.075027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.075185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.075247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.075396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.075422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.075622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.075672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.075847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.075905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.076056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.076112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.076282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.076309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.076447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.076512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.076690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.076717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.076909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.606 [2024-07-24 22:34:26.076958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.606 qpair failed and we were unable to recover it.
00:25:00.606 [2024-07-24 22:34:26.077077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.077134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.077308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.077367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.077544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.077571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.077733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.077760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.077902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.077954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 
00:25:00.606 [2024-07-24 22:34:26.078102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.078155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.078323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.078378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.078485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.078512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.078668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.078720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.078843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.078869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 
00:25:00.606 [2024-07-24 22:34:26.079021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.079068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.606 qpair failed and we were unable to recover it. 00:25:00.606 [2024-07-24 22:34:26.079197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.606 [2024-07-24 22:34:26.079249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.079421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.079470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.079585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.079611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.079756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.079808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.079979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.080103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.080257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.080462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.080665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.080863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.080917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.081045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.081219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.081465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.081626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.081765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.081913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.081974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.082081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.082109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.082281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.082335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.082464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.082522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.082676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.082725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.082882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.082910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.083073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.083123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.083294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.083357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.083459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.083493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.083636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.083667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.083890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.083937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.084177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.084230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.084353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.084406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.084503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.084531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.084651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.084677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.084852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.084910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.085067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.085126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.085260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.085286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 00:25:00.607 [2024-07-24 22:34:26.085450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.085475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.607 qpair failed and we were unable to recover it. 
00:25:00.607 [2024-07-24 22:34:26.085671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-24 22:34:26.085697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.085883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.085933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.086074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.086125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.086237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.086263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.086387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.086416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-24 22:34:26.086541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.086592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.086774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.086822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.086988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.087131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.087355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-24 22:34:26.087548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.087772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.087929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.087957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.088140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.088188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.088384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.088440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-24 22:34:26.088581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.088610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.088789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.088840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.089028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.089267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.089420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 
00:25:00.608 [2024-07-24 22:34:26.089589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.089748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.089904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.089930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.608 qpair failed and we were unable to recover it. 00:25:00.608 [2024-07-24 22:34:26.090136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-24 22:34:26.090162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-24 22:34:26.090338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.090364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 
00:25:00.609 [2024-07-24 22:34:26.090504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.090560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-24 22:34:26.090712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.090764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-24 22:34:26.090936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.090991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-24 22:34:26.091213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.091260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 00:25:00.609 [2024-07-24 22:34:26.091428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-24 22:34:26.091454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.609 qpair failed and we were unable to recover it. 
00:25:00.609 [2024-07-24 22:34:26.091624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.609 [2024-07-24 22:34:26.091657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.609 qpair failed and we were unable to recover it.
00:25:00.612 [... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence repeated for all remaining attempts through 22:34:26.115528, cycling over tqpair=0x7f02b8000b90, 0x7f02c0000b90, 0x7f02b0000b90, and 0x168b120, all with addr=10.0.0.2, port=4420; every qpair failed and could not be recovered ...]
00:25:00.612 [2024-07-24 22:34:26.115660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.115704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.115851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.115909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.116094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.116120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.116239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.116289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.116532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.116559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 
00:25:00.612 [2024-07-24 22:34:26.116768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.116795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.116952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.116980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.117144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.612 [2024-07-24 22:34:26.117205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.612 qpair failed and we were unable to recover it. 00:25:00.612 [2024-07-24 22:34:26.117404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.117459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.117643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.117704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 
00:25:00.613 [2024-07-24 22:34:26.117851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.117899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.118004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.118126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.118292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.118497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 
00:25:00.613 [2024-07-24 22:34:26.118726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.118921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.118978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.119113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.119164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.119336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.119363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.119526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.119553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 
00:25:00.613 [2024-07-24 22:34:26.119702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.119767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.119930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.119983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.120129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.120158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.120356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.120405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.120532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.120559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 
00:25:00.613 [2024-07-24 22:34:26.120662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.120688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.120818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.120870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.121041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.121100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.121277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.121304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 00:25:00.613 [2024-07-24 22:34:26.121427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.121476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.613 qpair failed and we were unable to recover it. 
00:25:00.613 [2024-07-24 22:34:26.121582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.613 [2024-07-24 22:34:26.121610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.121750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.121798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.122000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.122066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.122207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.122253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.122390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.122438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.122589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.122616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.122840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.122889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.123074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.123100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.123206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.123234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.123376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.123446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.123601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.123628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.123746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.123800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.123973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.124124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.124299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.124510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.124711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.124888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.124934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.125080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.125143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.125299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.125351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.125532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.125562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.125752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.125803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.125901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.125956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.126133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.126185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.126289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.126316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.126517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.126566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.126750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.126776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.126954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.126980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.127115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.127163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.127315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.127342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 
00:25:00.614 [2024-07-24 22:34:26.127498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.127550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.127683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.127740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.127911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.127966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.128068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.614 [2024-07-24 22:34:26.128094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.614 qpair failed and we were unable to recover it. 00:25:00.614 [2024-07-24 22:34:26.128292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.128343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-24 22:34:26.128532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.128559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.128727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.128753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.128894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.128949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.129068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.129124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.129271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.129328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-24 22:34:26.129498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.129554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.129741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.129798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.129950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.130009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.130180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.130206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 00:25:00.615 [2024-07-24 22:34:26.130307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.615 [2024-07-24 22:34:26.130334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.615 qpair failed and we were unable to recover it. 
00:25:00.615 [2024-07-24 22:34:26.130516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.130559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.130691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.130773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.130934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.130997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.131126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.131177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.131280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.131307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.131436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.131493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.131652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.131702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.131896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.131952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.132102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.132152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.132297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.132349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.132528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.132555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.132692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.132719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.132856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.132913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.133015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.133042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.133235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.133286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.133451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.133516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.133675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.133736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.133863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.133924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.134125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.134173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.134368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.134421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.134544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.134595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.134734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.134786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.134952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.135041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.135174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.615 [2024-07-24 22:34:26.135227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.615 qpair failed and we were unable to recover it.
00:25:00.615 [2024-07-24 22:34:26.135414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.135461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.135688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.135715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.135881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.135941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.136081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.136131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.136231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.136257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.136457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.136517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.136622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.136649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.136888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.136941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.137104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.137158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.137362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.137388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.137610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.137666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.137848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.137898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.138064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.138090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.138193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.138225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.138436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.138491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.138654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.138712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.138848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.138899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.139097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.139146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.139311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.139368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.139542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.139592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.139738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.139790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.139997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.140026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.140234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.140289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.140430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.140495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.140624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.140681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.140895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.140944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.141086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.141136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.141297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.141351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.141516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.141545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.141729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.141759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.141953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.141994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.142171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.142197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.142368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.142418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.142556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.142610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.142831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.142882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.142990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.143018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.143226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.616 [2024-07-24 22:34:26.143252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.616 qpair failed and we were unable to recover it.
00:25:00.616 [2024-07-24 22:34:26.143424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.143450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.143571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.143598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.143744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.143801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.143990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.144018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.144239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.144266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.144454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.144507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.144711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.144764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.144932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.144993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.145119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.145171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.145372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.145432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.145626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.145676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.145794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.145851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.146046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.146093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.146215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.146270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.146423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.146478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.146590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.146617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.146789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.146845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.147037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.147065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.147281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.147307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.147511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.147557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.147672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.147700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.147895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.147957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.148056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.148083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.148192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.148217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.148407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.148461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.148721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.148748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.148993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.149186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.149341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.149538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.149687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.149838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.149890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.150075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.150123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.150306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.150357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.150472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.150529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.150689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.150739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.150885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.617 [2024-07-24 22:34:26.150911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.617 qpair failed and we were unable to recover it.
00:25:00.617 [2024-07-24 22:34:26.151059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.151111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.151291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.151340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.151517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.151570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.151677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.151703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.151836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.151886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.152966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.152993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.153126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.153152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.153306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.153366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.153539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.153569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.153778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.153832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.153930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.153955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.154148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.618 [2024-07-24 22:34:26.154199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.618 qpair failed and we were unable to recover it.
00:25:00.618 [2024-07-24 22:34:26.154389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.154439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.154627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.154676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.154829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.154897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.155066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.155127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.155384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.155413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 
00:25:00.618 [2024-07-24 22:34:26.155584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.155638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.155813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.155867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.156051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.156105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.156278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.156329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.156528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.156557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 
00:25:00.618 [2024-07-24 22:34:26.156675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.156734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.156844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.156872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.157027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.157053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.157154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.157180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 00:25:00.618 [2024-07-24 22:34:26.157318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.618 [2024-07-24 22:34:26.157370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.618 qpair failed and we were unable to recover it. 
00:25:00.618 [2024-07-24 22:34:26.157530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.157557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.157749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.157801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.157969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.158019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.158174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.158225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.158416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.158470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.158655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.158706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.158853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.158904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.159087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.159137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.159309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.159337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.159528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.159555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.159748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.159796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.159892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.159918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.160046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.160093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.160307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.160364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.160563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.160609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.160780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.160836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.160986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.161037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.161221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.161250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.161445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.161502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.161655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.161717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.161850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.161901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.162085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.162111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.162305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.162331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.162512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.162563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.162702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.162750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.162850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.162876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.163078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.163136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.163310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.163371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.163556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.163607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.163719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.163781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.163942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.164113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.164310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.164468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.164678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 
00:25:00.619 [2024-07-24 22:34:26.164848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.164897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.165037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.165086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.619 [2024-07-24 22:34:26.165346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.619 [2024-07-24 22:34:26.165395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.619 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.165582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.165634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.165794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.165855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-24 22:34:26.165985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.166037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.166233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.166259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.166448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.166474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.166627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.166697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.166930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.166956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-24 22:34:26.167118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.167175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.167335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.167363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.167516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.167543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.167697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.167725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.167895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.167944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-24 22:34:26.168104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.168161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.168342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.168391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.168503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.168533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.168739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.168766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.168972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.169032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-24 22:34:26.169218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.169276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.169500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.169545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.169687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.169736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.169953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.170006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.170170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.170230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.620 [2024-07-24 22:34:26.170414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.170464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.170646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.170672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.170874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.170900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.171094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.171144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 00:25:00.620 [2024-07-24 22:34:26.171271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.620 [2024-07-24 22:34:26.171339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.620 qpair failed and we were unable to recover it. 
00:25:00.623 [2024-07-24 22:34:26.193837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.193864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.623 qpair failed and we were unable to recover it. 00:25:00.623 [2024-07-24 22:34:26.193968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.193995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.623 qpair failed and we were unable to recover it. 00:25:00.623 [2024-07-24 22:34:26.194211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.194237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.623 qpair failed and we were unable to recover it. 00:25:00.623 [2024-07-24 22:34:26.194397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.194454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.623 qpair failed and we were unable to recover it. 00:25:00.623 [2024-07-24 22:34:26.194567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.194596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.623 qpair failed and we were unable to recover it. 
00:25:00.623 [2024-07-24 22:34:26.194814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.623 [2024-07-24 22:34:26.194842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.194950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.194977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.195200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.195254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.195459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.195515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.195618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.195645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.195825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.195851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.196007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.196069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.196280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.196332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.196556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.196611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.196817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.196868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.196979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.197114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.197270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.197446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.197627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.197817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.197870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.198000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.198026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.198212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.198262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.198411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.198439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.198638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.198689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.198876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.198902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.199099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.199147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.199315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.199372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.199502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.199552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.199654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.199681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.199792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.199818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.199997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.200128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.200368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.200534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.200706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.200860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.200886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.201051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.201209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.201338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 
00:25:00.624 [2024-07-24 22:34:26.201533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.201662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.624 qpair failed and we were unable to recover it. 00:25:00.624 [2024-07-24 22:34:26.201837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.624 [2024-07-24 22:34:26.201900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.202115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.202141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.202323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.202377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.202563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.202593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.202742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.202769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.203019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.203072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.203343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.203396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.203513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.203647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.203674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.203862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.203916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.204116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.204171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.204278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.204304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.204512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.204540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.204716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.204773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.204959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.204985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.205144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.205170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.205302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.205328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.205518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.205570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.205673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.205700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.205895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.205924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.206144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.206194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.206305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.206333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.206443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.206470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.206653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.206679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.206847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.206904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.207027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.207084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.207222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.207270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.207438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.207501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.207606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.207633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.207809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.207835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.208044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.208095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.208265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.208316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 00:25:00.625 [2024-07-24 22:34:26.208465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.208500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it. 
00:25:00.625 [2024-07-24 22:34:26.208745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.625 [2024-07-24 22:34:26.208772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.625 qpair failed and we were unable to recover it.
[... the same connect()-failed / qpair-failed pair repeats ~114 more times between 22:34:26.208 and 22:34:26.231, cycling through tqpair=0x7f02b0000b90, 0x7f02b8000b90, and 0x7f02c0000b90, always with addr=10.0.0.2, port=4420, errno = 111 ...]
00:25:00.629 [2024-07-24 22:34:26.231644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.231695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.231832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.231913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.232099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.232125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.232309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.232362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.232554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.232581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.232741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.232799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.232928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.232970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.233084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.233109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.233276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.233323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.233420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.233451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.233635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.233694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.233846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.233897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.234037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.234063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.234244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.234298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.234392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.234418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.234524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.234552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.234727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.234784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.234950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.235210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.235437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.235581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.235763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.235916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.235944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.236083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.236131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.236296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.236322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.236511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.236539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.236711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.236763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.236917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.236946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.237098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.237180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.237334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.237386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 
00:25:00.629 [2024-07-24 22:34:26.237551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.237578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.237719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.237767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.629 [2024-07-24 22:34:26.237866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.629 [2024-07-24 22:34:26.237891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.629 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.237989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.238016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.238175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.238233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.238400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.238448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.238614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.238679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.238882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.238933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.239157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.239206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.239324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.239388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.239548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.239606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.239747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.239798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.239968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.239995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.240163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.240217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.240366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.240426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.240538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.240566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.240690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.240738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.240884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.240937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.241087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.241143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.241254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.241287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.241505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.241545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.241681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.241730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.241882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.241909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.242075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.242132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.242323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.242375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.242556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.242583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.242692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.242718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.242852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.242881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.243101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.243338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.243513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.243642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.243773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.243943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.243969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.244152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.244202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.630 [2024-07-24 22:34:26.244359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.244411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.244595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.244654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.244814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.244867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.245027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.245087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 00:25:00.630 [2024-07-24 22:34:26.245235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.630 [2024-07-24 22:34:26.245287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.630 qpair failed and we were unable to recover it. 
00:25:00.631 [2024-07-24 22:34:26.245456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-24 22:34:26.245518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-24 22:34:26.245678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-24 22:34:26.245741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-24 22:34:26.245878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-24 22:34:26.245932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-24 22:34:26.246066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-24 22:34:26.246092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 00:25:00.631 [2024-07-24 22:34:26.246291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.631 [2024-07-24 22:34:26.246341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.631 qpair failed and we were unable to recover it. 
00:25:00.631 [2024-07-24 22:34:26.246440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.246524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.246675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.246726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.246874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.246934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.247090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.247137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.247263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.247311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.247417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.247445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.247620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.247674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.247840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.247895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.248009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.248054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.248228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.248254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.248443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.248469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.248641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.248669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.248823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.248885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.249073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.249126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.249303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.249358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.249507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.249555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.249686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.249735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.249902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.249959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.250151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.250200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.250347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.250390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.250592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.250622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.250727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.250756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.250913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.250954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.251143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.251192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.251297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.251323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.251459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.251516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.251721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.251776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.251964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.252014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.252148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.252231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.252415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.252465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.252620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.252672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.631 [2024-07-24 22:34:26.252846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.631 [2024-07-24 22:34:26.252874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.631 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.253023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.253080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.253280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.253307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.253525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.253552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.253687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.253742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.253853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.253880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.254082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.254224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.254487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.254616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.254817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.254982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.255044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.255255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.255305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.255447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.255500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.255701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.255755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.255855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.255883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.256075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.256124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.256299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.256353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.256476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.256541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.256671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.256751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.256947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.256975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.257178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.257230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.257438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.257501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.257691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.257723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.257860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.257913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.258073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.258127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.258366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.258416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.258624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.258676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.258805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.258885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.259076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.259128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.259250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.259299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.259426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.259452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.259656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.632 [2024-07-24 22:34:26.259707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.632 qpair failed and we were unable to recover it.
00:25:00.632 [2024-07-24 22:34:26.259894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.259920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.260064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.260120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.260301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.260329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.260471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.260526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.260682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.260737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.260901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.260956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.261058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.261086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.261252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.261279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.261452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.261509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.261692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.261746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.261878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.261958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.262076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.262125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.262289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.262345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.262536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.262563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.262717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.262744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.262883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.262930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.263075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.263142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.263330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.263378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.263511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.263556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.263741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.263767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.263900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.263927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.264882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.264936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.265126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.265180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.265321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.265378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.265596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.265659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.265808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.265858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.265999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.266028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.266160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.266242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.266408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.266466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.266610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.266665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.266767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.266795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.266960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.633 [2024-07-24 22:34:26.267022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.633 qpair failed and we were unable to recover it.
00:25:00.633 [2024-07-24 22:34:26.267162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.267219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.267398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.267444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.267558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.267584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.267714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.267764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.267916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.267944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.268135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.268187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.268346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.268372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.268536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.268565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.268671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.268697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.268851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.268902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.269085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.269142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.269298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.269359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.269498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.269547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.269665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.634 [2024-07-24 22:34:26.269692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.634 qpair failed and we were unable to recover it.
00:25:00.634 [2024-07-24 22:34:26.269847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.269934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.270055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.270107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.270272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.270299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.270469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.270503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.270667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.270717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 
00:25:00.634 [2024-07-24 22:34:26.270864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.270916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.271091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.271149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.271274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.271341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.271508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.271550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.271667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.271716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 
00:25:00.634 [2024-07-24 22:34:26.271846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.271895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.272012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.272038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.272136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.272162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.272297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.272356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.272504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.272585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 
00:25:00.634 [2024-07-24 22:34:26.272730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.272782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.272964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.273018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.273180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.273241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.273388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.273445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.273580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.273669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 
00:25:00.634 [2024-07-24 22:34:26.273824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.273851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.273987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.274037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.274171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.274220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.634 [2024-07-24 22:34:26.274316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.634 [2024-07-24 22:34:26.274342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.634 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.274502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.274530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.274718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.274745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.274874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.274924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.275028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.275055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.275226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.275284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.275415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.275441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.275596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.275678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.275830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.275883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.275986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.276137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.276330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.276537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.276673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.276800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.276945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.276987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.277135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.277185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.277322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.277369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.277477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.277515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.277653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.277707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.277908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.277968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.278143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.278196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.278314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.278371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.278507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.278549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.278714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.278740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.278864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.278915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.279097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.279145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.279279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.279324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.279466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.279526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.279661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.279707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.279849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.279894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.279999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.280035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.280191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.280245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.280402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.280467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.280587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.280614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.280739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.280789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.635 [2024-07-24 22:34:26.280941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.281008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 
00:25:00.635 [2024-07-24 22:34:26.281170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.635 [2024-07-24 22:34:26.281218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.635 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.281362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.281388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.281543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.281579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.281703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.281753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.281887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.281934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 
00:25:00.636 [2024-07-24 22:34:26.282070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.282121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.282247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.282289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.282406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.282433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.282545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.282573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 00:25:00.636 [2024-07-24 22:34:26.282686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.636 [2024-07-24 22:34:26.282735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.636 qpair failed and we were unable to recover it. 
00:25:00.922 [2024-07-24 22:34:26.282865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.282915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.283124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.283149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.283318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.283344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.283456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.283491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.283613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.283662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 
00:25:00.922 [2024-07-24 22:34:26.283817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.283880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.283991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.284165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.284352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.284505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 
00:25:00.922 [2024-07-24 22:34:26.284653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.284869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.284935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.285038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.285066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.285190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.285272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.285374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.285399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 
00:25:00.922 [2024-07-24 22:34:26.285507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.285533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.922 [2024-07-24 22:34:26.285670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.922 [2024-07-24 22:34:26.285722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.922 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.285844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.285892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.286013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.286234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.286394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.286577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.286739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.286957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.286983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.287102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.287149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.287273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.287320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.287422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.287449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.287579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.287633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.287771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.287819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.287948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.288123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.288273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.288399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.288526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.288667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.288811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.288964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.288991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.289104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.289255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.289398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.289594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.289766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.289905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.289955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.290064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.290244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.290371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.290515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.290727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.290874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.290921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.291063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.291145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 
00:25:00.923 [2024-07-24 22:34:26.291273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.291320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.291435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.291500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.291634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.291679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.291804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.923 [2024-07-24 22:34:26.291852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.923 qpair failed and we were unable to recover it. 00:25:00.923 [2024-07-24 22:34:26.291969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.292143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.292290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.292411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.292567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.292762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.292789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.292952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.293137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.293287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.293419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.293577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.293753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.293836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.294013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.294133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.294284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.294456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.294732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.294885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.294929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.295502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.295885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.295995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.296141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.296314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.296513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.296669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.296909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.296962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.297089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.297170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.297289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.297348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.297507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.297568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.297728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.297791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.297923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.298006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.298133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.298216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 
00:25:00.924 [2024-07-24 22:34:26.298334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.924 [2024-07-24 22:34:26.298379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.924 qpair failed and we were unable to recover it. 00:25:00.924 [2024-07-24 22:34:26.298490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.298517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.298644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.298684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.298814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.298856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.299013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 
00:25:00.925 [2024-07-24 22:34:26.299192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.299360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.299552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.299733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.299893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.299941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 
00:25:00.925 [2024-07-24 22:34:26.300055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.300101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.300227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.300282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.300409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.300478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.300616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.300663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.300791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.300873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 
00:25:00.925 [2024-07-24 22:34:26.301020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.301147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.301175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.301317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.301373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.301471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.301509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 00:25:00.925 [2024-07-24 22:34:26.301609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.925 [2024-07-24 22:34:26.301636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.925 qpair failed and we were unable to recover it. 
00:25:00.925 [2024-07-24 22:34:26.301777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.301816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.301941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.302005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.302153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.302191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.302334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.302399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.302496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.302523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.302629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.302659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.302799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1699190 is same with the state(5) to be set
00:25:00.925 [2024-07-24 22:34:26.302967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.303188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.303362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.303523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.303680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.303891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.303945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.304066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.304124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.304296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.304353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.304464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.304500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.304632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.304697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.304891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.304940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.305091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.925 [2024-07-24 22:34:26.305152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.925 qpair failed and we were unable to recover it.
00:25:00.925 [2024-07-24 22:34:26.305401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.305427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.305571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.305636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.305783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.305848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.306012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.306072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.306190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.306239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.306489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.306516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.306662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.306688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.306816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.306865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.307914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.307943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.308128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.308180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.308302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.308348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.308529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.308578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.308730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.308792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.308914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.308960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.309087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.309135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.309256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.309323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.309460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.309554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.309686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.309764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.309887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.309947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.310951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.310978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.311109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.311185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.311305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.311354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.311477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.311564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.311717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.311743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.311861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.311916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.312052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.312098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.312233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.312280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.312391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.926 [2024-07-24 22:34:26.312417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.926 qpair failed and we were unable to recover it.
00:25:00.926 [2024-07-24 22:34:26.312520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.312548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.312666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.312714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.312864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.312903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.313860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.313987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.314186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.314349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.314473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.314633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.314835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.314892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.315853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.315910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.316881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.316939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.317086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.317145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.317294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.317332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.317439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.317466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.317605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.317662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.927 qpair failed and we were unable to recover it.
00:25:00.927 [2024-07-24 22:34:26.317810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.927 [2024-07-24 22:34:26.317871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.318883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.318932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.319033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.928 [2024-07-24 22:34:26.319060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.928 qpair failed and we were unable to recover it.
00:25:00.928 [2024-07-24 22:34:26.319152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.319181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.319329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.319373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.319515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.319572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.319734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.319760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.319877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.319930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.928 [2024-07-24 22:34:26.320038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.320065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.320215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.320254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.320433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.320490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.320633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.320680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.320918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.320947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.928 [2024-07-24 22:34:26.321074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.321226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.321387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.321567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.321756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.928 [2024-07-24 22:34:26.321906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.321947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.322070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.322116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.322239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.322291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.322415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.322463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.322719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.322747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.928 [2024-07-24 22:34:26.322878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.322903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.323107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.323156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.323271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.323317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.323461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.323496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.323621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.323668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.928 [2024-07-24 22:34:26.323859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.323919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.324070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.324097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.324216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.324262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.324379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.324433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 00:25:00.928 [2024-07-24 22:34:26.324539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.928 [2024-07-24 22:34:26.324567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.928 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.324675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.324702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.324809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.324837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.324957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.325155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.325340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.325531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.325715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.325895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.325949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.326114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.326168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.326271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.326298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.326424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.326470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.326619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.326666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.326778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.326820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.326979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.327135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.327260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.327388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.327519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.327662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.327842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.327894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.328011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.328168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.328358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.328516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.328648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.328830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.328869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.328993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.329148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.329295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.329436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.329610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.329781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.329954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.329999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.330123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.330165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.330267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.330293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 
00:25:00.929 [2024-07-24 22:34:26.330399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.330429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.330541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.330569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.929 [2024-07-24 22:34:26.330692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.929 [2024-07-24 22:34:26.330738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.929 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.330874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.330919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.331045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 
00:25:00.930 [2024-07-24 22:34:26.331217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.331387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.331550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.331702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.331874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.331901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 
00:25:00.930 [2024-07-24 22:34:26.332003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.332134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.332271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.332394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.332548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 
00:25:00.930 [2024-07-24 22:34:26.332742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.332891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.332937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.333049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.333218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.333385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 
00:25:00.930 [2024-07-24 22:34:26.333548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.333722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.333898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.333943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.334067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.334113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 00:25:00.930 [2024-07-24 22:34:26.334222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.930 [2024-07-24 22:34:26.334250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.930 qpair failed and we were unable to recover it. 
00:25:00.933 [2024-07-24 22:34:26.353043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.353197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.353352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.353525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.353684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 
00:25:00.933 [2024-07-24 22:34:26.353842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.353874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 
00:25:00.933 [2024-07-24 22:34:26.354640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.354924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.354969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.933 qpair failed and we were unable to recover it. 00:25:00.933 [2024-07-24 22:34:26.355072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.933 [2024-07-24 22:34:26.355099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.355218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.355255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.355374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.355400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.355535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.355580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.355706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.355748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.355864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.355893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.356015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.356171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.356337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.356499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.356667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.356823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.356852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.356985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.357135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.357280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.357438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.357604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.357751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.357890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.357931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.358508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.358861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.358983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.359136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.359286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.359431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.359586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.359731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.359897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.359939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.360054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.360201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.360338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.360499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.934 [2024-07-24 22:34:26.360655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 
00:25:00.934 [2024-07-24 22:34:26.360818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.934 [2024-07-24 22:34:26.360856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.934 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.361664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.361846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.361961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.362101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.362240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.362385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.362549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.362704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.362862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.362902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.363021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.363168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.363308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.363447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.363622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.363789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.363954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.363999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.364139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.364169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.364302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.364333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.364468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.364522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.364638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.364666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.364798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.364837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.364966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.365123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.365283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.365444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.365618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.365775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.365919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.365946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.366065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.366231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 
00:25:00.935 [2024-07-24 22:34:26.366391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.366558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.366730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.935 [2024-07-24 22:34:26.366893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.935 [2024-07-24 22:34:26.366934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.935 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.367054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 
00:25:00.936 [2024-07-24 22:34:26.367225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.367390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.367566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.367733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.367875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.367907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 
00:25:00.936 [2024-07-24 22:34:26.368033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.368194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.368360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.368518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.368659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 
00:25:00.936 [2024-07-24 22:34:26.368814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.368956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.936 [2024-07-24 22:34:26.368983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.936 qpair failed and we were unable to recover it. 00:25:00.936 [2024-07-24 22:34:26.369086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.369225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.369358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 
00:25:00.937 [2024-07-24 22:34:26.369499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.369642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.369816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.369844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.369974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.370014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.370133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.370174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 
00:25:00.937 [2024-07-24 22:34:26.370296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.370337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.370456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.370503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.937 [2024-07-24 22:34:26.370733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.937 [2024-07-24 22:34:26.370761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.937 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.370873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.370900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.371026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.371189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.371349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.371502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.371645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.371801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.371840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.371966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.372232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.372366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.372523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.372685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.372843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.372885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.373000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.373028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.373160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.373199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.373316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.373356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.373605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.373632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.373856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.373884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.373989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.374137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.374302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.374466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.374641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.374836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.374877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.375010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.375209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.375372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.375553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.375744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.375925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.375952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.376067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.376228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.376395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.376567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.376740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.376891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.376930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.938 [2024-07-24 22:34:26.377053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.377093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 
00:25:00.938 [2024-07-24 22:34:26.377211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.938 [2024-07-24 22:34:26.377238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.938 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.377376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.377418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.377525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.377553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.377653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.377679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.377793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.377822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.377948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.377988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.378089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.378244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.378404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.378562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.378715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.378866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.378893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.379454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.379909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.379935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.380050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.380215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.380369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.380519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.380674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.380840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.380867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.380987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.381016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.381133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.381172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.381291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.381330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.381445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.381489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 00:25:00.939 [2024-07-24 22:34:26.381608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.939 [2024-07-24 22:34:26.381635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.939 qpair failed and we were unable to recover it. 
00:25:00.939 [2024-07-24 22:34:26.381763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.381806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.381908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.381934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.382853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.382894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.939 [2024-07-24 22:34:26.383017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.939 [2024-07-24 22:34:26.383056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.939 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.383891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.383919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.384905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.384934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.385962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.385989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.386875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.386901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.387932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.387958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.388061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.388088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.940 [2024-07-24 22:34:26.388195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.940 [2024-07-24 22:34:26.388225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.940 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.388338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.388366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.388467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.388499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.388607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.388632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.388731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.388759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.388891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.388950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.389872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.389898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.390816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.390844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.391960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.391987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.392916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.392942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.941 qpair failed and we were unable to recover it.
00:25:00.941 [2024-07-24 22:34:26.393695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.941 [2024-07-24 22:34:26.393721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.393819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.393845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.393958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.393987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.394958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.394983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.395082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.395108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.395213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.942 [2024-07-24 22:34:26.395240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.942 qpair failed and we were unable to recover it.
00:25:00.942 [2024-07-24 22:34:26.395342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.395370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.395469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.395503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.395610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.395636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.395737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.395764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.395879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.395906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 
00:25:00.942 [2024-07-24 22:34:26.396015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 
00:25:00.942 [2024-07-24 22:34:26.396713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.396969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.396996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 
00:25:00.942 [2024-07-24 22:34:26.397358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.397871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.397904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 
00:25:00.942 [2024-07-24 22:34:26.398013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.398041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.398144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.398170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.942 qpair failed and we were unable to recover it. 00:25:00.942 [2024-07-24 22:34:26.398269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.942 [2024-07-24 22:34:26.398295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.398398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.398424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.398585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.398639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.398760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.398804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.398938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.399604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.399873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.399977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.400110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.400243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.400390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.400519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.400646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.400778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.400921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.400947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.401590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.401972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.401997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.402100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.402229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.402358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.402495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.402630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.402760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.402888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.402913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.403015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.403042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.403139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.403167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.403265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.403293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 00:25:00.943 [2024-07-24 22:34:26.403396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.403430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.943 qpair failed and we were unable to recover it. 
00:25:00.943 [2024-07-24 22:34:26.403552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.943 [2024-07-24 22:34:26.403582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.403692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.403718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.403826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.403852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.403951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.403977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.404078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 
00:25:00.944 [2024-07-24 22:34:26.404207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.404340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.404471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.404608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.404733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 
00:25:00.944 [2024-07-24 22:34:26.404868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.404896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 
00:25:00.944 [2024-07-24 22:34:26.405523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.405919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.405945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.406051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 
00:25:00.944 [2024-07-24 22:34:26.406187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.406327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.406464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.406605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 00:25:00.944 [2024-07-24 22:34:26.406730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.944 [2024-07-24 22:34:26.406755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.944 qpair failed and we were unable to recover it. 
00:25:00.944 [2024-07-24 22:34:26.406871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.406897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.406994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.407926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.407952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.408060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.408088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.408189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.408215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.408318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.408349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.944 qpair failed and we were unable to recover it.
00:25:00.944 [2024-07-24 22:34:26.408453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.944 [2024-07-24 22:34:26.408486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.408588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.408613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.408718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.408747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.408859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.408885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.408981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.409908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.409933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.410966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.410993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.411887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.411913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.412949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.412976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.413073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.413099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.413200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.413227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.413321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.945 [2024-07-24 22:34:26.413347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.945 qpair failed and we were unable to recover it.
00:25:00.945 [2024-07-24 22:34:26.413449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.413475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.413586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.413612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.413738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.413782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.413961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.414903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.414929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.415871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.415972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.416873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.416899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.417880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.417995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.946 [2024-07-24 22:34:26.418021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.946 qpair failed and we were unable to recover it.
00:25:00.946 [2024-07-24 22:34:26.418121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.418882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.418908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.419883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.419992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.420816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.420882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.421956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.421998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.422124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.422255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.422392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.422535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.947 [2024-07-24 22:34:26.422668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.947 qpair failed and we were unable to recover it.
00:25:00.947 [2024-07-24 22:34:26.422766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.422792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 00:25:00.947 [2024-07-24 22:34:26.422888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.422914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 00:25:00.947 [2024-07-24 22:34:26.423022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.423051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 00:25:00.947 [2024-07-24 22:34:26.423152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.423182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 00:25:00.947 [2024-07-24 22:34:26.423288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.423316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 
00:25:00.947 [2024-07-24 22:34:26.423421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.947 [2024-07-24 22:34:26.423448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.947 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.423559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.423586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.423694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.423721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.423843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.423871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.423976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.424103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.424237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.424369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.424503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.424635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.424766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.424894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.424920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.425406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.425799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.425825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.426037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.426167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.426315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.426551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.426694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.426825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.426964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.426992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.427667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.427936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.427998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.428129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.428290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 
00:25:00.948 [2024-07-24 22:34:26.428437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.428617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.428807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.948 [2024-07-24 22:34:26.428946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.948 [2024-07-24 22:34:26.428973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.948 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.429086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.429234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.429364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.429502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.429631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.429766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.429936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.429962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.430656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.430914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.430941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.431322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.431875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.431902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.432002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.432657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.432925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.432950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.433200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.433228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.433361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.433427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
00:25:00.949 [2024-07-24 22:34:26.433630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.433679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.433779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.433804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.433903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.433930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.434034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.434060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 00:25:00.949 [2024-07-24 22:34:26.434171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.949 [2024-07-24 22:34:26.434205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.949 qpair failed and we were unable to recover it. 
[... repeated connect() failures (errno = 111) to addr=10.0.0.2, port=4420 for tqpair=0x168b120, 0x7f02b0000b90, 0x7f02b8000b90, and 0x7f02c0000b90 omitted (2024-07-24 22:34:26.434327 through 22:34:26.449142); each qpair failed and could not be recovered ...]
00:25:00.952 [2024-07-24 22:34:26.449252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.952 [2024-07-24 22:34:26.449278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.952 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.449386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.449414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.449517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.449544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.449642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.449667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.449768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.449794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.449902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.449928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.450570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.450874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.450978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.451109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.451244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.451368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.451505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.451641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.451778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.451902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.451928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.452562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.452951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.452977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.453082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.453227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.453357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.453501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.453633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.453760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 
00:25:00.953 [2024-07-24 22:34:26.453886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.453915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.454020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.454047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.454156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.953 [2024-07-24 22:34:26.454184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.953 qpair failed and we were unable to recover it. 00:25:00.953 [2024-07-24 22:34:26.454288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.454314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.454413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.454439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.454546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.454573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.454700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.454753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.454865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.454908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.455367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.455949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.455976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.456087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.456231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.456372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.456515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.456647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.456775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.456898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.456924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.457464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.457886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.457986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.458109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.458239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.458361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.458518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.458731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.458896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.458937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.459057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.459101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.459224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.459285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.459439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.459527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 00:25:00.954 [2024-07-24 22:34:26.459664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.459707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.954 [2024-07-24 22:34:26.459841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.954 [2024-07-24 22:34:26.459867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.954 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.475883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.475909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.476572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.476874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.476906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.477036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.477192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.477374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.477576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.477721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.477923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.477964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.478159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.478308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.478443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.478591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.478716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.478842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.478968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.478994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.479618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.479936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.479978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.480099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.480139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.480274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.480334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 
00:25:00.958 [2024-07-24 22:34:26.480493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.480551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.480681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.958 [2024-07-24 22:34:26.480763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.958 qpair failed and we were unable to recover it. 00:25:00.958 [2024-07-24 22:34:26.480892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.480919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.481022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.481193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.481350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.481510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.481751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.481906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.481949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.482063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.482229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.482410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.482568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.482753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.482922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.482964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.483088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.483143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.483267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.483308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.483438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.483487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.483644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.483698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.483793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.483818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.483978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.484034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.484181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.484231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.484415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.484442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.484606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.484659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.484790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.484871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.485008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.485159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.485328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.485518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.485670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.485867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.485911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.486030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.486072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.486250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.486300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.486403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.486431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.486616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.486643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.486771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.486826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.486980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.487034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.487149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.487191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.487343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.487396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 00:25:00.959 [2024-07-24 22:34:26.487490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.959 [2024-07-24 22:34:26.487524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.959 qpair failed and we were unable to recover it. 
00:25:00.959 [2024-07-24 22:34:26.487660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.960 [2024-07-24 22:34:26.487740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.960 qpair failed and we were unable to recover it. 00:25:00.960 [2024-07-24 22:34:26.487841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.960 [2024-07-24 22:34:26.487867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.960 qpair failed and we were unable to recover it. 00:25:00.960 [2024-07-24 22:34:26.487991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.960 [2024-07-24 22:34:26.488056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.960 qpair failed and we were unable to recover it. 00:25:00.960 [2024-07-24 22:34:26.488231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.960 [2024-07-24 22:34:26.488284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.960 qpair failed and we were unable to recover it. 00:25:00.960 [2024-07-24 22:34:26.488391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.960 [2024-07-24 22:34:26.488424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.960 qpair failed and we were unable to recover it. 
00:25:00.960 [2024-07-24 22:34:26.488587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.488640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.488771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.488824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.488943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.489191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.489339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.489520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.489687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.489843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.489884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.490011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.490095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.490221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.490289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.490429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.490515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.490665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.490718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.490848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.490902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.491891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.491973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.492106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.492131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.492327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.492376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.492500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.492565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.492688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.492714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.492837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.492890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.960 [2024-07-24 22:34:26.493863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.960 [2024-07-24 22:34:26.493888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.960 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.494901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.494958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.495120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.495303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.495463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.495662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.495870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.495987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.496190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.496354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.496521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.496702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.496863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.496889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.497066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.497279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.497488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.497663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.497853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.497974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.498134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.498304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.498508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.498709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.498886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.498930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.499098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.499283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.499499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.499672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.499841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.499968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.500036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.500189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.500239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.500362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.500431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.500585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.500668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.500858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.500906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.501038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.501092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.501217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.501256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.501379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.961 [2024-07-24 22:34:26.501405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.961 qpair failed and we were unable to recover it.
00:25:00.961 [2024-07-24 22:34:26.501515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.501544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.501694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.501759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.501878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.501925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.502049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.502264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.502486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.502683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.502862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.502986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.503142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.503349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.503535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.503689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.503868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.503937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.504071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.504115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.504243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.504327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.504473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.504544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.504735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.504764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.504893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.504934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.505909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.505961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.506137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.506186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.506288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.506315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.506463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.506537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.506667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.506720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.506823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.506849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.507874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.507902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.508028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.508112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.508245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.508298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.508438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.508500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.508695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.508722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.962 qpair failed and we were unable to recover it.
00:25:00.962 [2024-07-24 22:34:26.508902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.962 [2024-07-24 22:34:26.508930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.509114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.509237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.509467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.509694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.509846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.509995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.510205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.510350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.510543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.510740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.510910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.510965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.511174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.511329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.511521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.511650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.511866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.511972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.512137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.512332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.512486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.512640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.512807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.512860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.513921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.513976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.514095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.514141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.514271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.514351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.514478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.514574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.514680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.514707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.514812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.514839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.515938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.515967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.963 [2024-07-24 22:34:26.516075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.963 [2024-07-24 22:34:26.516102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.963 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.516873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.516901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.517943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.517983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.518152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.518328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.518510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.518662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.518872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.518980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.519173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.519341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.519557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.519702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.519840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.519866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.520047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.520240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.520495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.520672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.520866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.520976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.521188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.521414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.521571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.521729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.521902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.521930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.522032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.522058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.964 [2024-07-24 22:34:26.522184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.964 [2024-07-24 22:34:26.522231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.964 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.522349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.522398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.522501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.522528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.522653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.522723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.522919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.522980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.523151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.523326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.523503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.523689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.523867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.523996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.524146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.524353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.524550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.524722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.524894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.524938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.525093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.525151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.525280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.525339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.525477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.525543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.525688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.525737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.525887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.525914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.526050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.965 [2024-07-24 22:34:26.526097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.965 qpair failed and we were unable to recover it.
00:25:00.965 [2024-07-24 22:34:26.526205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.526234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.526389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.526458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.526612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.526661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.526790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.526835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.526937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.526966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 
00:25:00.965 [2024-07-24 22:34:26.527064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.527207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.527376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.527516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.527698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 
00:25:00.965 [2024-07-24 22:34:26.527905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.527960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.528090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.528146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.528298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.528358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.528475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.528516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.528640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.528695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 
00:25:00.965 [2024-07-24 22:34:26.528823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.528851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.528988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.529137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.529357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.529566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 
00:25:00.965 [2024-07-24 22:34:26.529709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.965 [2024-07-24 22:34:26.529891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.965 [2024-07-24 22:34:26.529945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.965 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.530431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.530874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.530900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.531042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.531190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.531328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.531496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.531661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.531801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.531827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.531952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.532142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.532296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.532452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.532667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.532852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.532899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.533016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.533206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.533362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.533526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.533687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.533847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.533890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.534502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.534937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.534963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.535084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.535131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.535260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.535318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.535501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.535555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.535688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.535738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.535844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.535870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.535986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.536160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.536320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.536497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.536668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 00:25:00.966 [2024-07-24 22:34:26.536834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.966 [2024-07-24 22:34:26.536888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.966 qpair failed and we were unable to recover it. 
00:25:00.966 [2024-07-24 22:34:26.537052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.537250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.537431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.537593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.537761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 
00:25:00.967 [2024-07-24 22:34:26.537920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.537946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.538058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.538103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.538254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.538306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.538408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.538435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 00:25:00.967 [2024-07-24 22:34:26.538560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.538606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it. 
00:25:00.967 [2024-07-24 22:34:26.538743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.967 [2024-07-24 22:34:26.538791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.967 qpair failed and we were unable to recover it.
00:25:00.970 [... the three lines above repeat for every reconnect attempt between 22:34:26.538 and 22:34:26.558, cycling through tqpair=0x7f02b8000b90, 0x7f02c0000b90, 0x7f02b0000b90, and 0x168b120; every attempt fails with connect() errno = 111 against addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:25:00.970 [2024-07-24 22:34:26.558441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.558532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.558647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.558687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.558804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.558834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.558935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.558963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.559061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.559087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 
00:25:00.970 [2024-07-24 22:34:26.559216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.559296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.559418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.559459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.559605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.559649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.559778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.559858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.559987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 
00:25:00.970 [2024-07-24 22:34:26.560129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.560337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.560498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.560673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.560831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.560884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 
00:25:00.970 [2024-07-24 22:34:26.561020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.561078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.561210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.561249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.561390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.561444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.561557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.561585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 00:25:00.970 [2024-07-24 22:34:26.561742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.970 [2024-07-24 22:34:26.561789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.970 qpair failed and we were unable to recover it. 
00:25:00.970 [2024-07-24 22:34:26.561922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.561979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.562121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.562176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.562301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.562384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.562528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.562580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.562707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.562751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.562873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.562915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.563041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.563231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.563469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.563645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.563792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.563951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.563995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.564188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.564242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.564363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.564417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.564533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.564560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.564664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.564690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.564817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.564871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.565031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.565208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.565377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.565572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.565761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.565917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.565962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.566099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.566144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.566250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.566278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.566406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.566473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.566634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.566715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.566833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.566878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.567006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.567045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.567194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.567259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.567432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.567509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.567682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.567730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.567858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.567939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.568070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.568152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.568284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.568336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 
00:25:00.971 [2024-07-24 22:34:26.568500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.568545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.568666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.568713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.971 qpair failed and we were unable to recover it. 00:25:00.971 [2024-07-24 22:34:26.568842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.971 [2024-07-24 22:34:26.568925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.569094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.569119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.569243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.569311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.569519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.569568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.569742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.569789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.569919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.569999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.570117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.570161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.570279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.570337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.570458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.570521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.570655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.570709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.570895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.570949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.571049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.571205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.571374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.571599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.571751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.571919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.571963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.572126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.572173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.572301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.572346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.572475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.572519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.572671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.572722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.572854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.572935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.573061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.573106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.573225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.573289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.573411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.573441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.573603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.573647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.573808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.573874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.574018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.574072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.574235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.574291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.574437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.574512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.574705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.574753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.574879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.574943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.575094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.575138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.972 [2024-07-24 22:34:26.575238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.575265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.575415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.575474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.575661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.575707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.575866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.575916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 00:25:00.972 [2024-07-24 22:34:26.576050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.972 [2024-07-24 22:34:26.576089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.972 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.576277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.576333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.576466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.576532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.576664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.576748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.576852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.576879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.577063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.577112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.577230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.577278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.577406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.577494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.577618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.577664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.577779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.577825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.578026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.578202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.578371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.578536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.578671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.578888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.578938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.579061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.579116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.579338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.579393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.579504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.579531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.579676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.579722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.579827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.579854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.580045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.580250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.580423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.580595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.580804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.580959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.580986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.581168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.581218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.581339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.581383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.581530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.581596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.581779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.581833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 
00:25:00.973 [2024-07-24 22:34:26.581951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.582008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.582209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.582261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.973 qpair failed and we were unable to recover it. 00:25:00.973 [2024-07-24 22:34:26.582407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.973 [2024-07-24 22:34:26.582462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.582583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.582610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.582757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.582785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.582910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.582977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.583172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.583224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.583418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.583468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.583613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.583670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.583858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.583911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.584044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.584090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.584227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.584280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.584469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.584524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.584718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.584748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.584921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.584972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.585102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.585153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.585255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.585280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.585407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.585449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.585653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.585705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.585829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.585897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.586079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.586133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.586317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.586366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.586466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.586507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.586657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.586710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.586878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.586904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.587003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.587127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.587352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.587510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.587644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.587853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.587904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.588007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.588035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.588232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.588259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.588403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.588458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.588705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.588757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 
00:25:00.974 [2024-07-24 22:34:26.588858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.588884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.974 [2024-07-24 22:34:26.588979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.974 [2024-07-24 22:34:26.589004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.974 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.589201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.589248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.589371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.589415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.589586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.589636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 
00:25:00.975 [2024-07-24 22:34:26.589835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.589890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.589994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.590191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.590419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.590583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 
00:25:00.975 [2024-07-24 22:34:26.590772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.590916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.590947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.591044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.591070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.591244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.591293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.591392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.591418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 
00:25:00.975 [2024-07-24 22:34:26.591643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.591693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.591808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.591853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.591982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.592064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.592239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.592288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 00:25:00.975 [2024-07-24 22:34:26.592415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.975 [2024-07-24 22:34:26.592468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:00.975 qpair failed and we were unable to recover it. 
00:25:00.975 [2024-07-24 22:34:26.592572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.592597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.592715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.592759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.592960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.593014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.593219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.593272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.593466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.593538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.593747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.593798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.593974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.594153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.594338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.594592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.594751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.594933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.594990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.595110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.595138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.595242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.595268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.595417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.595491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.595700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.595748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.595882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.595923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.596071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.596112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.596255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.975 [2024-07-24 22:34:26.596310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.975 qpair failed and we were unable to recover it.
00:25:00.975 [2024-07-24 22:34:26.596510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.596557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.596750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.596804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.596953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.597184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.597346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.597514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.597680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.597877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.597940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.598152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.598320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.598445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.598575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.598803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.598952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.599007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:00.976 [2024-07-24 22:34:26.599170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.976 [2024-07-24 22:34:26.599227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:00.976 qpair failed and we were unable to recover it.
00:25:01.269 [2024-07-24 22:34:26.599324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.599404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.599533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.599583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.599707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.599750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.599919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.599984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.600207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.600256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.600393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.600441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.600560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.600589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.600709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.600735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.600883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.600947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.601102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.601150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.601297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.601361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.601528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.601591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.601805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.601855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.601985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.602041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.602167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.602249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.602400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.602430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.602567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.602619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.602786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.602846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.602980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.603154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.603301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.603458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.603644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.603825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.603870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.604043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.604259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.604437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.604634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.604844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.604974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.605184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.605313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.605524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.605680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.605841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.605888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.606054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.606115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.606262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.606310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.270 [2024-07-24 22:34:26.606413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.270 [2024-07-24 22:34:26.606438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.270 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.606616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.606667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.606788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.606834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.606963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.607044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.607170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.607225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.607388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.607447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.607594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.607676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.607829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.607892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.608075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.608248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.608434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.608636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.608815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.608962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.609950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.609988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.610129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.610175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.610355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.610414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.610568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.610651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.610781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.271 [2024-07-24 22:34:26.610824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.271 qpair failed and we were unable to recover it.
00:25:01.271 [2024-07-24 22:34:26.610974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.611179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.611385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.611523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.611701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 
00:25:01.271 [2024-07-24 22:34:26.611890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.611936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.612070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.612153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.612275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.612303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.612417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.612443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.612597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.612662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 
00:25:01.271 [2024-07-24 22:34:26.612807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.612856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.613015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.613078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.613186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.613212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.613325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.613373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 00:25:01.271 [2024-07-24 22:34:26.613514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.271 [2024-07-24 22:34:26.613562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.271 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.613695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.613737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.613886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.613914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.614043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.614091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.614236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.614316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.614418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.614444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.614607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.614660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.614787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.614856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.615013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.615222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.615406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.615592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.615768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.615952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.615994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.616136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.616183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.616312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.616359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.616516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.616568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.616684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.616711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.616871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.616936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.617038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.617066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.617227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.617293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.617446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.617475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.617608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.617656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.617827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.617890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.618024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.618105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.618265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.618318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.618434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.618517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.618683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.618710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.618852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.618935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.619079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.619125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.619266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.619309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.619452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.619489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.619649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.619675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.619791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.619838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.619987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.620045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.620182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.620231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 
00:25:01.272 [2024-07-24 22:34:26.620355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.620407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.620552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.272 [2024-07-24 22:34:26.620599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.272 qpair failed and we were unable to recover it. 00:25:01.272 [2024-07-24 22:34:26.620761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.620828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.620986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.621047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.621204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.621264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.621424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.621496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.621643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.621721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.621885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.621913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.622079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.622133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.622261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.622308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.622423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.622470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.622612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.622661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.622795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.622843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.622985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.623217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.623420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.623606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.623773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.623933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.623973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.624102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.624143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.624244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.624275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.624413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.624468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.624630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.624690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.624846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.624905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.625010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.625166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.625366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.625595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.625747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.625931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.625984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.626175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.626228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.626396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.626453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.626597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.626650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.626792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.626818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.626957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.627136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.627319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.627488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.627666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 00:25:01.273 [2024-07-24 22:34:26.627792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.273 [2024-07-24 22:34:26.627820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.273 qpair failed and we were unable to recover it. 
00:25:01.273 [2024-07-24 22:34:26.627925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.627952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.628678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.628948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.628976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.629146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.629194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.629338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.629397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.629565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.629621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.629767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.629793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.629924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.629976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.630128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.630196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.630394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.630447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.630581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.630608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.630800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.630882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.630984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.631153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.631280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.631434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.631635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.631828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.631881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.632013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.632224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.632353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.632491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.632711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.632914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.632995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.633176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.633227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 
00:25:01.274 [2024-07-24 22:34:26.633424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.274 [2024-07-24 22:34:26.633474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.274 qpair failed and we were unable to recover it. 00:25:01.274 [2024-07-24 22:34:26.633703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.633754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.633903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.633968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.634098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.634143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.634319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.634376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.634477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.634512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.634664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.634717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.634872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.634933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.635099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.635224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.635347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.635510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.635711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.635883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.635964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.636115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.636178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.636370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.636422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.636573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.636620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.636770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.636838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.636947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.636974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.637076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.637104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.637224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.637280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.637435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.637512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.637660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.637718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.637848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.637898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.638051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.638096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.638198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.638226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.638352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.638419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.638616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.638669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.638832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.638888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.639033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.639231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.639375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.639544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.639696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.639933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.639984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 
00:25:01.275 [2024-07-24 22:34:26.640131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.640193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.640351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.640406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.640531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.640584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.640739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.275 [2024-07-24 22:34:26.640794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.275 qpair failed and we were unable to recover it. 00:25:01.275 [2024-07-24 22:34:26.640936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 
00:25:01.276 [2024-07-24 22:34:26.641114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.641338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.641501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.641722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.641921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.641977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 
00:25:01.276 [2024-07-24 22:34:26.642092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.642118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.642220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.642247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.642378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.642429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.642553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.642580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.642775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.642827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 
00:25:01.276 [2024-07-24 22:34:26.643010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.643153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.643327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.643522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.643668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 
00:25:01.276 [2024-07-24 22:34:26.643838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.643893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.644063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.644113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.644250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.644339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.644464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.644522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 00:25:01.276 [2024-07-24 22:34:26.644704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.276 [2024-07-24 22:34:26.644753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.276 qpair failed and we were unable to recover it. 
00:25:01.276 [2024-07-24 22:34:26.644914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.276 [2024-07-24 22:34:26.644976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.276 qpair failed and we were unable to recover it.
[... the same two-line error record (connect() failed, errno = 111 / ECONNREFUSED, followed by "qpair failed and we were unable to recover it.") repeats continuously from 22:34:26.644 through 22:34:26.666 for tqpairs 0x168b120, 0x7f02b0000b90, 0x7f02b8000b90, and 0x7f02c0000b90, all attempting to reach addr=10.0.0.2, port=4420 ...]
00:25:01.279 [2024-07-24 22:34:26.667082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.667165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.667264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.667289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.667431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.667491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.667642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.667709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.667811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.667838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 
00:25:01.279 [2024-07-24 22:34:26.667995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.668203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.668354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.668568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.668765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 
00:25:01.279 [2024-07-24 22:34:26.668936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.279 [2024-07-24 22:34:26.668963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.279 qpair failed and we were unable to recover it. 00:25:01.279 [2024-07-24 22:34:26.669124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.669152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.669304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.669356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.669496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.669546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.669644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.669670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.669803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.669843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.669971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.670054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.670193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.670245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.670421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.670473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.670644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.670700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.670842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.670896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.671050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.671111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.671259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.671285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.671418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.671460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.671656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.671711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.671877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.671933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.672035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.672061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.672193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.672273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.672432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.672485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.672633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.672659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.672872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.672925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.673125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.673176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.673348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.673397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.673565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.673615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.673723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.673753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.673910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.673965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.674128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.674193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.674339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.674405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.674518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.674553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.674655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.674683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.674817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.674864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.675069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.675121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.675305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.675335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.675546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.675583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.675735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.675762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 
00:25:01.280 [2024-07-24 22:34:26.675878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.675944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.676104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.676162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.676304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.676360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.280 [2024-07-24 22:34:26.676512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.280 [2024-07-24 22:34:26.676577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.280 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.676727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.676784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.676925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.676970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.677097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.677152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.677285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.677330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.677522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.677571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.677748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.677800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.677901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.677927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.678091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.678117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.678274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.678341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.678514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.678551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.678699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.678727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.678920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.678962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.679063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.679088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.679232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.679258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.679412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.679438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.679613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.679666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.679805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.679854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.680069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.680119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.680302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.680349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.680458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.680494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.680647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.680701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.680863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.680926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.681058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.681137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.681328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.681384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.681503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.681540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.681704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.681751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281 [2024-07-24 22:34:26.681899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.681953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.682133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.682183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.682288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.682314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.682428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.682457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 00:25:01.281 [2024-07-24 22:34:26.682614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.281 [2024-07-24 22:34:26.682664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.281 qpair failed and we were unable to recover it. 
00:25:01.281-00:25:01.284 [2024-07-24 22:34:26.682792 - 22:34:26.703844] (repeats of the same failure pattern elided: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 / 0x7f02b8000b90 / 0x7f02b0000b90 / 0x168b120 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." - roughly 110 further occurrences)
00:25:01.284 [2024-07-24 22:34:26.703938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.284 [2024-07-24 22:34:26.703963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.284 qpair failed and we were unable to recover it. 00:25:01.284 [2024-07-24 22:34:26.704071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.284 [2024-07-24 22:34:26.704097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.284 qpair failed and we were unable to recover it. 00:25:01.284 [2024-07-24 22:34:26.704269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.284 [2024-07-24 22:34:26.704319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.284 qpair failed and we were unable to recover it. 00:25:01.284 [2024-07-24 22:34:26.704436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.284 [2024-07-24 22:34:26.704493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.284 qpair failed and we were unable to recover it. 00:25:01.284 [2024-07-24 22:34:26.704642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.284 [2024-07-24 22:34:26.704693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.704824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.704878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.704973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.704998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.705146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.705173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.705279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.705306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.705405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.705432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.705625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.705675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.705810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.705857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.705979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.706046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.706239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.706294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.706423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.706477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.706653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.706709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.706881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.706935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.707056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.707257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.707380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.707511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.707705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.707883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.707928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.708112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.708166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.708332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.708361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.708532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.708591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.708737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.708803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.708924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.708970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.709097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.709281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.709405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.709537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.709753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.709919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.709957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.710071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.710098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.710249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.710311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.710409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.710437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.710598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.710655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.710756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.710783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.711007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.711056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 
00:25:01.285 [2024-07-24 22:34:26.711211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.711265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.711426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.285 [2024-07-24 22:34:26.711494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.285 qpair failed and we were unable to recover it. 00:25:01.285 [2024-07-24 22:34:26.711601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.711629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.711792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.711850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.711996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.712023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.712184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.712237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.712387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.712444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.712631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.712687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.712786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.712811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.713046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.713100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.713286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.713340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.713471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.713531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.713686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.713744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.713864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.713911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.714014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.714160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.714331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.714515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.714737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.714953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.714979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.715170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.715220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.715354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.715403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.715569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.715617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.715748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.715830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.715981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.716008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.716117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.716146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.716360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.716408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.716558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.716613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.716755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.716808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.716950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.717133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.717367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.717519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.717675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 00:25:01.286 [2024-07-24 22:34:26.717864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.286 [2024-07-24 22:34:26.717892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.286 qpair failed and we were unable to recover it. 
00:25:01.286 [2024-07-24 22:34:26.718079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.286 [2024-07-24 22:34:26.718129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.286 qpair failed and we were unable to recover it.
00:25:01.286 [2024-07-24 22:34:26.718291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.286 [2024-07-24 22:34:26.718350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.286 qpair failed and we were unable to recover it.
00:25:01.286 [2024-07-24 22:34:26.718450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.286 [2024-07-24 22:34:26.718476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.286 qpair failed and we were unable to recover it.
00:25:01.286 [2024-07-24 22:34:26.718660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.286 [2024-07-24 22:34:26.718712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.286 qpair failed and we were unable to recover it.
00:25:01.286 [2024-07-24 22:34:26.718845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.286 [2024-07-24 22:34:26.718925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.286 qpair failed and we were unable to recover it.
00:25:01.286 [2024-07-24 22:34:26.719089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.719142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.719342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.719376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.719544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.719572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.719675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.719702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.719827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.719872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.720868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.720893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.721927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.721973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.722156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.722204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.722389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.722440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.722576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.722635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.722825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.722876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.723074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.723128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.723241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.723267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.723394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.723445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.723639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.723690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.723830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.723882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.724080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.724128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.724227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.724253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.724368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.724430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.724549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.724601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.724792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.724842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.725035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.725085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.725245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.725294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.725484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.725510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.725658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.725710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.725908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.725960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.726066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.726095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.726262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.287 [2024-07-24 22:34:26.726318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.287 qpair failed and we were unable to recover it.
00:25:01.287 [2024-07-24 22:34:26.726467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.726500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.726694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.726748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.726888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.726934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.727033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.727059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.727260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.727314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.727439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.727526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.727732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.727758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.727862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.727887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.728038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.728065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.728211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.728237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.728361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.728422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.728551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.728611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.728852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.728902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.729063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.729120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.729247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.729294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.729539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.729565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.729662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.729688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.729849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.729913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.730019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.730046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.730254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.730301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.730459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.730524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.730729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.730781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.730959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.730985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.731180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.731231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.731426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.731473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.731658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.731714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.731915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.731965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.732175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.732225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.732411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.732463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.732574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.732604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.732724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.732779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.732919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.732972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.733104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.288 [2024-07-24 22:34:26.733185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.288 qpair failed and we were unable to recover it.
00:25:01.288 [2024-07-24 22:34:26.733376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.733427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.733561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.733607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.733708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.733734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.733917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.733970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.734121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.734174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.734330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.734380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.734488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.734516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.734667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.734731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.734857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.734913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.735109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.735161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.735388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.735441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.735607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.735674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.735842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.735900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.736062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.736119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.736247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.736315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.736485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.736517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.736714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.736767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.736964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.737014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.737149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.737230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.737403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.737452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.737614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.737679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.737866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.737918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.738928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.738954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.739909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.739936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.740120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.740173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.740339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.740395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.740600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.740651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.740752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.289 [2024-07-24 22:34:26.740778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.289 qpair failed and we were unable to recover it.
00:25:01.289 [2024-07-24 22:34:26.740887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.290 [2024-07-24 22:34:26.740914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.290 qpair failed and we were unable to recover it.
00:25:01.290 [2024-07-24 22:34:26.741053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.741100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.741241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.741287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.741530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.741558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.741783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.741837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.741958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.742004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.742129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.742197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.742397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.742449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.742646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.742698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.742888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.742941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.743098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.743158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.743257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.743282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.743421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.743466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.743603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.743689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.743897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.743948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.744128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.744181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.744339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.744388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.744490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.744521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.744682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.744739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.744853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.744883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.745013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.745097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.745233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.745280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.745428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.745501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.745670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.745720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.745863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.745914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.746013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.746182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.746403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.746609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.746801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.746943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.746969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.747140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.747197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.747379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.747433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.747549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.747597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.747726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.747806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.747957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.747984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 
00:25:01.290 [2024-07-24 22:34:26.748178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.748227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.748510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.290 [2024-07-24 22:34:26.748559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.290 qpair failed and we were unable to recover it. 00:25:01.290 [2024-07-24 22:34:26.748664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.748692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.748794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.748820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.748952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.749165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.749379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.749562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.749709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.749909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.749960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.750151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.750200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.750363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.750418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.750525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.750553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.750740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.750768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.750888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.750943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.751044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.751170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.751318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.751566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.751798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.751936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.751996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.752194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.752242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.752370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.752419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.752639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.752690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.752883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.752935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.753062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.753113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.753262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.753290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.753423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.753469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.753662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.753690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.753791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.753817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.754011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.754060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.754228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.754287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.754427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.754489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.754689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.754740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.754888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.754914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 
00:25:01.291 [2024-07-24 22:34:26.755018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.755046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.755248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.755301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.755524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.755568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.291 [2024-07-24 22:34:26.755765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.291 [2024-07-24 22:34:26.755818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.291 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.755973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.756035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 
00:25:01.292 [2024-07-24 22:34:26.756130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.756156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.756366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.756421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.756531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.756559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.756756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.756805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.756956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.757026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 
00:25:01.292 [2024-07-24 22:34:26.757231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.757283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.757413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.757506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.757681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.757733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.757928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.757977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 00:25:01.292 [2024-07-24 22:34:26.758149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.292 [2024-07-24 22:34:26.758199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.292 qpair failed and we were unable to recover it. 
00:25:01.292 [2024-07-24 22:34:26.758412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.758470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.758585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.758612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.758735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.758803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.758964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.758990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.759175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.759228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.759426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.759478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.759702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.759754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.759938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.759965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.760167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.760218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.760361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.760411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.760611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.760664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.760773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.760802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.760982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.761033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.761179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.761245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.761435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.761489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.761677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.761730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.761858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.761938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.762101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.762158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.762360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.762409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.762514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.762542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.762702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.762760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.762911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.762941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.763072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.763152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.763301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.763352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.292 qpair failed and we were unable to recover it.
00:25:01.292 [2024-07-24 22:34:26.763538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.292 [2024-07-24 22:34:26.763565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.763733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.763789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.763954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.763981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.764175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.764225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.764412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.764468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.764587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.764615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.764717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.764744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.764871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.764898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.765871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.765936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.766877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.766903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.767106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.767281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.767460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.767641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.767825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.767972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.768098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.768235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.768435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.768660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.768848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.768897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.769054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.769104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.769233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.293 [2024-07-24 22:34:26.769281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.293 qpair failed and we were unable to recover it.
00:25:01.293 [2024-07-24 22:34:26.769457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.769531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.769773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.769837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.769985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.770170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.770318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.770514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.770709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.770930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.770980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.771161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.771215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.771354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.771399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.771536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.771564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.771668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.771694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.771890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.771939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.772929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.772982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.773956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.773985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.774901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.774927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.775060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.775111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.775310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.775359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.775468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.775504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.775639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.775719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.775846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.775928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.776056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.776127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.294 [2024-07-24 22:34:26.776290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.294 [2024-07-24 22:34:26.776354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.294 qpair failed and we were unable to recover it.
00:25:01.295 [2024-07-24 22:34:26.776511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.776578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.776683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.776710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.776816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.776842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.776937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.776963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.777122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.777180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.777307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.777355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.777506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.777561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.777764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.777818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.777946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.777993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.778127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.778176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.778301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.778352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.778499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.778543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.778699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.778726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.778893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.778943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.779077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.779126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.779263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.779312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.779450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.779514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.779646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.779693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.779846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.779874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.780012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.780166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.780367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.780549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.780719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.780871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.780916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.781062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.781246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.781394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.781569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.781757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.781932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.781980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.782111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.782166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.782267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.782294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.782394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.782421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.782597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.782655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 
00:25:01.295 [2024-07-24 22:34:26.782814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.782841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.782957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.783006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.295 qpair failed and we were unable to recover it. 00:25:01.295 [2024-07-24 22:34:26.783154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.295 [2024-07-24 22:34:26.783182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.783319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.783368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.783502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.783552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.783685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.783733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.783875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.783925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.784092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.784149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.784276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.784348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.784513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.784576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.784757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.784810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.784941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.784991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.785118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.785165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.785276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.785303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.785406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.785431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.785548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.785600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.785801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.785854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.785982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.786163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.786364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.786521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.786693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.786873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.786919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.787287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.787779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.787955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.788105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.788258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.788428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.788570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.788783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 
00:25:01.296 [2024-07-24 22:34:26.788939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.788965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.789095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.789140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.789269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.789315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.789512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.296 [2024-07-24 22:34:26.789561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.296 qpair failed and we were unable to recover it. 00:25:01.296 [2024-07-24 22:34:26.789712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.789767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 
00:25:01.297 [2024-07-24 22:34:26.789893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.789937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 
00:25:01.297 [2024-07-24 22:34:26.790694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.790943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.790969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.791116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.791181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.791312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.791393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 
00:25:01.297 [2024-07-24 22:34:26.791504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.791533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.791667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.791715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.791904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.791954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.792115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.792174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 00:25:01.297 [2024-07-24 22:34:26.792276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.297 [2024-07-24 22:34:26.792302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.297 qpair failed and we were unable to recover it. 
00:25:01.297 [2024-07-24 22:34:26.792427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.297 [2024-07-24 22:34:26.792474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.297 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." — repeats continuously from 22:34:26.792583 through 22:34:26.811704, all against addr=10.0.0.2, port=4420, cycling over tqpair values 0x7f02b8000b90, 0x7f02c0000b90, 0x7f02b0000b90, and 0x168b120 ...]
00:25:01.300 [2024-07-24 22:34:26.811814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.811841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.811960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.812127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.812278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.812406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 
00:25:01.300 [2024-07-24 22:34:26.812574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.812750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.812904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.812933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.813054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.813103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.813268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.813327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 
00:25:01.300 [2024-07-24 22:34:26.813452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.813513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.813621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.813647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.813771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.813832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.813975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.814034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.814165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.814213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 
00:25:01.300 [2024-07-24 22:34:26.814352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.300 [2024-07-24 22:34:26.814401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.300 qpair failed and we were unable to recover it. 00:25:01.300 [2024-07-24 22:34:26.814554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.814603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.814736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.814797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.814930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.814983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.815098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.815145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.815273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.815326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.815451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.815502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.815626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.815694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.815838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.816070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.816121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.816294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.816341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.816457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.816510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.816644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.816692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.816817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.816865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.816996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.817162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.817332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.817533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.817692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.817839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.817865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.817989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.818147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.818366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.818522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.818674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.818846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.818892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.819018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.819186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.819368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.819556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.819727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.819947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.819998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.820185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.820234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.820412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.820462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.820596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.820652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 
00:25:01.301 [2024-07-24 22:34:26.820751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.820778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.820883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.820910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.301 qpair failed and we were unable to recover it. 00:25:01.301 [2024-07-24 22:34:26.821045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.301 [2024-07-24 22:34:26.821094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.821213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.821258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.821376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.821421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 
00:25:01.302 [2024-07-24 22:34:26.821541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.821589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.821749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.821800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.821930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.821968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.822063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.822207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 
00:25:01.302 [2024-07-24 22:34:26.822354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.822508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.822663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.822836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.822895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.823031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.823077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 
00:25:01.302 [2024-07-24 22:34:26.823195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.823252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.823384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.823440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.823589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.823617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.823787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.823849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.824040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 
00:25:01.302 [2024-07-24 22:34:26.824219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.824410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.824564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.824723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 00:25:01.302 [2024-07-24 22:34:26.824935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.302 [2024-07-24 22:34:26.824981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.302 qpair failed and we were unable to recover it. 
00:25:01.302 [2024-07-24 22:34:26.825097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.302 [2024-07-24 22:34:26.825145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.302 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" sequence repeats continuously from 22:34:26.825097 through 22:34:26.845061 for tqpairs 0x7f02b8000b90, 0x7f02c0000b90, and 0x7f02b0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:01.305 [2024-07-24 22:34:26.845206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.305 [2024-07-24 22:34:26.845248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.305 qpair failed and we were unable to recover it. 00:25:01.305 [2024-07-24 22:34:26.845387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.305 [2024-07-24 22:34:26.845427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.305 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.845524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.845550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.845736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.845784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.845891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.845916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.846041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.846208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.846427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.846613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.846738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.846881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.846925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.847045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.847255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.847414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.847560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.847739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.847916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.847957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.848107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.848277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.848425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.848607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.848756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.848911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.848957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.849076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.849122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.849249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.849303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.849455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.849490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.849606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.849659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.849806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.849847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.849983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.850131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.850280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.850453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.850660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.850834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.850885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.851027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.851071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 
00:25:01.306 [2024-07-24 22:34:26.851173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.851199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.851312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.851362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.851500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.851543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.306 [2024-07-24 22:34:26.851657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.306 [2024-07-24 22:34:26.851702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.306 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.851827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.851869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.851991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.852169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.852342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.852518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.852696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.852875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.852914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.853632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.853931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.853973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.854375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.854941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.854966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.855089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.855133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.855287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.855334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.855456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.855510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.855666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.855711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.855881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.855943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.856138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.856189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.856317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.856400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.856500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.856529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.856647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.856690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 00:25:01.307 [2024-07-24 22:34:26.856804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.307 [2024-07-24 22:34:26.856847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.307 qpair failed and we were unable to recover it. 
00:25:01.307 [2024-07-24 22:34:26.856968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.308 [2024-07-24 22:34:26.857013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.308 qpair failed and we were unable to recover it. 00:25:01.308 [2024-07-24 22:34:26.857146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.308 [2024-07-24 22:34:26.857190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.308 qpair failed and we were unable to recover it. 00:25:01.308 [2024-07-24 22:34:26.857289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.308 [2024-07-24 22:34:26.857315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.308 qpair failed and we were unable to recover it. 00:25:01.308 [2024-07-24 22:34:26.857409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.308 [2024-07-24 22:34:26.857436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.308 qpair failed and we were unable to recover it. 00:25:01.308 [2024-07-24 22:34:26.857542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.308 [2024-07-24 22:34:26.857571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.308 qpair failed and we were unable to recover it. 
00:25:01.308 [2024-07-24 22:34:26.857677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.308 [2024-07-24 22:34:26.857704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.308 qpair failed and we were unable to recover it.
00:25:01.308 [... identical connect() failed (errno = 111) / qpair failed sequence repeated through 2024-07-24 22:34:26.875985, alternating between tqpair=0x7f02b8000b90 and tqpair=0x7f02c0000b90, always against addr=10.0.0.2, port=4420 ...]
00:25:01.311 [2024-07-24 22:34:26.876106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.876262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.876404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.876529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.876668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.876829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.876861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.876989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.877185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.877339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.877492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.877637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.877783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.877810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.877962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.878103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.878288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.878445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.878649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.878832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.878875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.878989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.879142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.879301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.879460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.879607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.879765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.879796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.879957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.880124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.880280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.880458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.880609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 00:25:01.311 [2024-07-24 22:34:26.880789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.311 [2024-07-24 22:34:26.880829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.311 qpair failed and we were unable to recover it. 
00:25:01.311 [2024-07-24 22:34:26.880947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.880987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.881097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.881138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.881257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.881299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.881529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.881555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.881667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.881696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.881829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.881860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.882643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.882879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.882992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.883256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.883412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.883594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.883735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.883897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.883936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.884356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.884873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.884987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.885178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.885318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.885455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.885669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.885810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.885836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.885969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.886116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.886276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.886429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.886583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.886769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.886922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.886950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.887091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.887136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.887284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.887312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 00:25:01.312 [2024-07-24 22:34:26.887429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.887455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.312 qpair failed and we were unable to recover it. 
00:25:01.312 [2024-07-24 22:34:26.887581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.312 [2024-07-24 22:34:26.887622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.313 qpair failed and we were unable to recover it. 00:25:01.313 [2024-07-24 22:34:26.887741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.313 [2024-07-24 22:34:26.887782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.313 qpair failed and we were unable to recover it. 00:25:01.313 [2024-07-24 22:34:26.887894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.313 [2024-07-24 22:34:26.887934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.313 qpair failed and we were unable to recover it. 00:25:01.313 [2024-07-24 22:34:26.888056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.313 [2024-07-24 22:34:26.888097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.313 qpair failed and we were unable to recover it. 00:25:01.313 [2024-07-24 22:34:26.888215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.313 [2024-07-24 22:34:26.888255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.313 qpair failed and we were unable to recover it. 
00:25:01.313 [2024-07-24 22:34:26.888367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.888393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.888499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.888527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.888634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.888662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.888762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.888788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.888889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.888916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.889905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.889932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.890848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.890876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.891948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.891976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.892954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.892982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.893098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.893142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.893279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.893306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.313 [2024-07-24 22:34:26.893411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.313 [2024-07-24 22:34:26.893438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.313 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.893547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.893573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.893682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.893709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.893868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.893910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.894938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.894978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.895959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.895985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.896926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.896952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.897892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.897919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.898868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.898899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.899022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.899048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.899148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.899174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.899272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.899298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.899401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.899429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.314 [2024-07-24 22:34:26.899548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.314 [2024-07-24 22:34:26.899576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.314 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.899686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.899713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.899823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.899849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.899944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.899971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.900910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.900937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.901967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.901994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.902942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.902968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.903859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.903887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.904066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.904190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.904352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.904498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.315 [2024-07-24 22:34:26.904666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.315 qpair failed and we were unable to recover it.
00:25:01.315 [2024-07-24 22:34:26.904781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.315 [2024-07-24 22:34:26.904823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.315 qpair failed and we were unable to recover it. 00:25:01.315 [2024-07-24 22:34:26.904942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.315 [2024-07-24 22:34:26.904986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.315 qpair failed and we were unable to recover it. 00:25:01.315 [2024-07-24 22:34:26.905098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.315 [2024-07-24 22:34:26.905142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.315 qpair failed and we were unable to recover it. 00:25:01.315 [2024-07-24 22:34:26.905275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.315 [2024-07-24 22:34:26.905302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.315 qpair failed and we were unable to recover it. 00:25:01.315 [2024-07-24 22:34:26.905399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.315 [2024-07-24 22:34:26.905425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.315 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.905531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.905568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.905672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.905699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.905802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.905830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.905946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.905972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.906079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.906210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.906344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.906471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.906620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.906752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.906876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.906902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.907566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.907889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.907996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.908121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.908246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.908373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.908520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.908651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.908781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 
00:25:01.316 [2024-07-24 22:34:26.908909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.908936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.909033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.316 [2024-07-24 22:34:26.909059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.316 qpair failed and we were unable to recover it. 00:25:01.316 [2024-07-24 22:34:26.909158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.909293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.909424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.909561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.909685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.909818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.909949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.909975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.910235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.910861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.910888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.910989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.911538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.911956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.911984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.912082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.912219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.912367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.912512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.912649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.912783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.912919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.912946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 
00:25:01.317 [2024-07-24 22:34:26.913572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.913970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.317 [2024-07-24 22:34:26.913997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.317 qpair failed and we were unable to recover it. 00:25:01.317 [2024-07-24 22:34:26.914094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 
00:25:01.318 [2024-07-24 22:34:26.914226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.914354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.914489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.914629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.914754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 
00:25:01.318 [2024-07-24 22:34:26.914881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.914908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.915010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.915037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.915135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.915161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.915263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.915288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 00:25:01.318 [2024-07-24 22:34:26.915385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.318 [2024-07-24 22:34:26.915411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.318 qpair failed and we were unable to recover it. 
00:25:01.318 [identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error triples repeat from 2024-07-24 22:34:26.915629 through 22:34:26.931925 for tqpair=0x7f02b8000b90, 0x7f02c0000b90, and 0x7f02b0000b90 with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it."]
00:25:01.321 [2024-07-24 22:34:26.932025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 
00:25:01.321 [2024-07-24 22:34:26.932662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.932917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.932943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.933051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.933077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.933174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.933200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 
00:25:01.321 [2024-07-24 22:34:26.933303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.933330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.936496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.936536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.936654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.936683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.936796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.936824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.936967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.936994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 
00:25:01.321 [2024-07-24 22:34:26.937120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.321 [2024-07-24 22:34:26.937147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.321 qpair failed and we were unable to recover it. 00:25:01.321 [2024-07-24 22:34:26.937263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.937290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.937426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.937454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.937606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.937658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.937792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.937836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.937957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.937988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.938248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.938274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.938372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.938398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.938515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.938542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.938657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.938685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.938843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.938883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.939004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.939195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.939377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.939588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.939769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.939967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.939992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.940102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.940130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.940293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.940327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.940475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.940533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.940664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.940706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.940876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.940904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.941034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.941212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.941382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.941547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.941703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.941883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.941910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.942079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.942127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.942259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.942299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.942457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.942521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.942631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.942657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.942787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.942832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.942965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.943009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.943123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.943167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 
00:25:01.322 [2024-07-24 22:34:26.943296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.943342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.943490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.943545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.943694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.322 [2024-07-24 22:34:26.943735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.322 qpair failed and we were unable to recover it. 00:25:01.322 [2024-07-24 22:34:26.943854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.323 [2024-07-24 22:34:26.943895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.323 qpair failed and we were unable to recover it. 00:25:01.323 [2024-07-24 22:34:26.944036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.323 [2024-07-24 22:34:26.944082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.323 qpair failed and we were unable to recover it. 
00:25:01.323 [2024-07-24 22:34:26.944226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.323 [2024-07-24 22:34:26.944271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.323 qpair failed and we were unable to recover it. 00:25:01.323 [2024-07-24 22:34:26.944403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.323 [2024-07-24 22:34:26.944449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.323 qpair failed and we were unable to recover it. 00:25:01.323 [2024-07-24 22:34:26.944607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.323 [2024-07-24 22:34:26.944643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.323 qpair failed and we were unable to recover it. 00:25:01.608 [2024-07-24 22:34:26.944794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.944836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 00:25:01.608 [2024-07-24 22:34:26.944951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.944998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 
00:25:01.608 [2024-07-24 22:34:26.945125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.945169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 00:25:01.608 [2024-07-24 22:34:26.945305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.945350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 00:25:01.608 [2024-07-24 22:34:26.945506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.945554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 00:25:01.608 [2024-07-24 22:34:26.945688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.608 [2024-07-24 22:34:26.945728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.608 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.945835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.945864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 
00:25:01.609 [2024-07-24 22:34:26.946099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.946242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.946398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.946553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.946705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 
00:25:01.609 [2024-07-24 22:34:26.946859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.946901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.947044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.947088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.947215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.947261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.947393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.947438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 00:25:01.609 [2024-07-24 22:34:26.947550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.609 [2024-07-24 22:34:26.947583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.609 qpair failed and we were unable to recover it. 
00:25:01.609 [2024-07-24 22:34:26.947716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.947761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.947885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.947928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.948163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.948189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.948308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.948354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.948493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.948539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.948712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.948751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.948918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.948945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.949895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.949922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.950913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.950941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.951129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.951168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.951298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.951342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.951498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.951541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.951682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.951708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.951845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.951890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.952003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.952044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.952188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.609 [2024-07-24 22:34:26.952214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.609 qpair failed and we were unable to recover it.
00:25:01.609 [2024-07-24 22:34:26.952350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.952392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.952539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.952593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.952721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.952764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.952895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.952940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.953877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.953903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.954952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.954992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.955948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.955989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.956175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.956224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.956360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.956400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.956529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.956579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.956739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.956784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.956960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.957004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.957166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.957211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.957397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.957443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.957620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.957666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.957827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.957873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.958951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.958997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.610 [2024-07-24 22:34:26.959126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.610 [2024-07-24 22:34:26.959166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.610 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.959309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.959355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.959536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.959582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.959757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.959790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.959919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.959957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.960086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.960129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.960252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.960296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.960497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.960555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.960681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.960723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.960851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.960881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.961898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.961941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.962098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.962140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.962304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.962349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.962505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.962551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.962680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.962728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.962855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.962899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.963072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.963271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.963490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.963664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.963833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.963987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.964139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.964314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.964491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.964712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.964855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.964901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.965930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.965973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.611 qpair failed and we were unable to recover it.
00:25:01.611 [2024-07-24 22:34:26.966133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.611 [2024-07-24 22:34:26.966182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.966356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.966381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.966527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.966563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.966754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.966798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.966932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.966977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.967108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.967157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.967291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.967333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.967463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.967517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.967673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.967717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.967821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.967847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.968009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.968054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.968184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.968232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.968389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.968436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.968562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.612 [2024-07-24 22:34:26.968607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.612 qpair failed and we were unable to recover it.
00:25:01.612 [2024-07-24 22:34:26.968745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.968789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.968912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.968954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.969085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.969228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.969380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 
00:25:01.612 [2024-07-24 22:34:26.969537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.969690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.969883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.969928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.970064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.970108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.970239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.970287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 
00:25:01.612 [2024-07-24 22:34:26.970427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.970475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.970649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.970694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.970826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.970871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.971071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.971115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.971273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.971312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 
00:25:01.612 [2024-07-24 22:34:26.971469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.971521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.971685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.971712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.971871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.971917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.972041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.972091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.972265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.972312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 
00:25:01.612 [2024-07-24 22:34:26.972417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.972443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.972644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.972678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.972820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.972859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.972988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.612 [2024-07-24 22:34:26.973032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.612 qpair failed and we were unable to recover it. 00:25:01.612 [2024-07-24 22:34:26.973135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.973161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.973287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.973328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.973466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.973514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.973633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.973676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.973844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.973872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.974034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.974228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.974378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.974571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.974721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.974877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.974916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.975040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.975233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.975382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.975569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.975775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.975939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.975979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.976097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.976130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.976311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.976356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.976522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.976554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.976731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.976774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.976910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.976958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.977088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.977136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.977292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.977337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.977461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.977510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.977683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.977709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.977864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.977906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.978030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.978070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.978175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.978201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.978346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.978377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 00:25:01.613 [2024-07-24 22:34:26.978497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.978527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.613 qpair failed and we were unable to recover it. 
00:25:01.613 [2024-07-24 22:34:26.978688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.613 [2024-07-24 22:34:26.978731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.978877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.978924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.979045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.979076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.979244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.979290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.979445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.979490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 
00:25:01.614 [2024-07-24 22:34:26.979616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.979658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.979810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.979851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.980009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.980167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.980330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 
00:25:01.614 [2024-07-24 22:34:26.980501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.980680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.980870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.980916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.981039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.981228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 
00:25:01.614 [2024-07-24 22:34:26.981393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.981571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.981742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.981909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.981950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 00:25:01.614 [2024-07-24 22:34:26.982097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.614 [2024-07-24 22:34:26.982137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.614 qpair failed and we were unable to recover it. 
00:25:01.614 [2024-07-24 22:34:26.982265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.614 [2024-07-24 22:34:26.982307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.614 qpair failed and we were unable to recover it.
[... the connect()/qpair-failure record above repeats continuously from 22:34:26.982265 through 22:34:27.000163, always with errno = 111 against addr=10.0.0.2, port=4420, cycling over tqpair=0x168b120, 0x7f02b0000b90, 0x7f02b8000b90, and 0x7f02c0000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:01.617 [2024-07-24 22:34:27.000297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.000323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.000425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.000452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.000592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.000619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.000755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.000782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.000934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.000962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 
00:25:01.617 [2024-07-24 22:34:27.001069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.001231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.001358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.001495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.001626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 
00:25:01.617 [2024-07-24 22:34:27.001778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.001907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.001933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.002036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.002060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.002194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.002219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.617 qpair failed and we were unable to recover it. 00:25:01.617 [2024-07-24 22:34:27.002324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.617 [2024-07-24 22:34:27.002353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.002490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.002516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.002654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.002685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.002798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.002825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.002950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.002979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.003117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.003256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.003390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.003521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.003665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.003791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.003921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.003946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.004701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.004872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.004975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.005406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.005851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.005984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.006123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.006297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.006432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.006580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.618 [2024-07-24 22:34:27.006717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 
00:25:01.618 [2024-07-24 22:34:27.006860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.618 [2024-07-24 22:34:27.006885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.618 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.006993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.007563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.007872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.007974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.008119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.008263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.008393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.008549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.008692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.008831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.008973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.008998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.009692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.009952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.009978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.010357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 00:25:01.619 [2024-07-24 22:34:27.010917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.619 [2024-07-24 22:34:27.010948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.619 qpair failed and we were unable to recover it. 
00:25:01.619 [2024-07-24 22:34:27.011068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.619 [2024-07-24 22:34:27.011096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.619 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 in posix.c:1023:posix_sock_create, sock connection error in nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock, "qpair failed and we were unable to recover it.") repeats continuously from 22:34:27.011 through 22:34:27.027, cycling over tqpair handles 0x7f02b0000b90, 0x7f02b8000b90, 0x7f02c0000b90, and 0x168b120, all targeting addr=10.0.0.2, port=4420 ...]
00:25:01.622 [2024-07-24 22:34:27.027147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.622 [2024-07-24 22:34:27.027173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.622 qpair failed and we were unable to recover it. 00:25:01.622 [2024-07-24 22:34:27.027300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.622 [2024-07-24 22:34:27.027327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.622 qpair failed and we were unable to recover it. 00:25:01.622 [2024-07-24 22:34:27.027457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.027488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.027615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.027643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.027747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.027773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.027876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.027902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.028586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.028890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.028916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.029342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.029955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.029980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.030085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.030216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.030357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.030499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.030652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.030791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.030935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.030962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.031070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.031211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.031353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.031502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.031743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.031876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.031902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.032276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 00:25:01.623 [2024-07-24 22:34:27.032847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.623 [2024-07-24 22:34:27.032875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.623 qpair failed and we were unable to recover it. 
00:25:01.623 [2024-07-24 22:34:27.032997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.033696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.033966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.033994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.034374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.034917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.034947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.035050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.035722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.035884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.035999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.036395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.036967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.036992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.037106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.037234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.037364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.037501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.037639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 
00:25:01.624 [2024-07-24 22:34:27.037768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.037906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.037931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.038032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.624 [2024-07-24 22:34:27.038058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.624 qpair failed and we were unable to recover it. 00:25:01.624 [2024-07-24 22:34:27.038159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.625 [2024-07-24 22:34:27.038186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.625 qpair failed and we were unable to recover it. 00:25:01.625 [2024-07-24 22:34:27.038290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.625 [2024-07-24 22:34:27.038317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.625 qpair failed and we were unable to recover it. 
00:25:01.625-00:25:01.628 [2024-07-24 22:34:27.038422 through 22:34:27.053409] posix.c:1023:posix_sock_create / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: the same pair of *ERROR* messages above (connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420) repeats continuously for tqpairs 0x7f02c0000b90, 0x7f02b8000b90, and 0x7f02b0000b90, each attempt ending with "qpair failed and we were unable to recover it." 
00:25:01.628 [2024-07-24 22:34:27.053526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.053555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.053672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.053701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.053815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.053841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.053948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.053974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.054093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.054230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.054373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.054509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.054658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.054806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.054936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.054962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.055607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.055873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.055899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.056260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.056798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.056947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 
00:25:01.628 [2024-07-24 22:34:27.057644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.057943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.628 [2024-07-24 22:34:27.057969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.628 qpair failed and we were unable to recover it. 00:25:01.628 [2024-07-24 22:34:27.058087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.058215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.058367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.058508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.058645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.058774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.058914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.058939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.059045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.059169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.059303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.059440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.059604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.059743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.059886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.059912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.060430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.060882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.060910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.061159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.061860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.061886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.061992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.062143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.062276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.062428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.062601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.062750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.062893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.062919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.063032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.063058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.063158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.063184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 
00:25:01.629 [2024-07-24 22:34:27.063287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.629 [2024-07-24 22:34:27.063313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.629 qpair failed and we were unable to recover it. 00:25:01.629 [2024-07-24 22:34:27.063425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-24 22:34:27.063451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-24 22:34:27.063584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-24 22:34:27.063611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-24 22:34:27.063725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-24 22:34:27.063750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 00:25:01.630 [2024-07-24 22:34:27.063858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.630 [2024-07-24 22:34:27.063887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.630 qpair failed and we were unable to recover it. 
00:25:01.630 [2024-07-24 22:34:27.063992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.630 [2024-07-24 22:34:27.064021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.630 qpair failed and we were unable to recover it.
[The two error messages above (connect() failed with errno = 111, i.e. ECONNREFUSED, followed by the nvme_tcp qpair connection failure for addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.") repeat continuously from 22:34:27.063992 through 22:34:27.079691 (log timestamps 00:25:01.630 through 00:25:01.633) for tqpairs 0x7f02b0000b90, 0x7f02b8000b90, 0x7f02c0000b90, and 0x168b120; the repeats are elided here.]
00:25:01.633 [2024-07-24 22:34:27.079800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.079826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.079945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.079972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.080470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.080888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.080915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.081139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.081846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.081872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.081979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.082511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.082966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.082992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.083095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.083225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.083349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.083484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.083632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.083768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.083905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.083932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 
00:25:01.633 [2024-07-24 22:34:27.084592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.633 [2024-07-24 22:34:27.084973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.633 [2024-07-24 22:34:27.084998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.633 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.085097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.085231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.085377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.085524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.085650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.085779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.085922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.085948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.086642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.086909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.086935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.087289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.087829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.087973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.087999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.088684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.088950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.088980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.089327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 00:25:01.634 [2024-07-24 22:34:27.089870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.634 [2024-07-24 22:34:27.089897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.634 qpair failed and we were unable to recover it. 
00:25:01.634 [2024-07-24 22:34:27.090005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.634 [2024-07-24 22:34:27.090033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.634 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 22:34:27.090136 through 22:34:27.105427, cycling across tqpair=0x7f02b8000b90, 0x7f02c0000b90, and 0x7f02b0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:01.637 [2024-07-24 22:34:27.105529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.105557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.105663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.105690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.105801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.105827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.105929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.105955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.106063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-24 22:34:27.106207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.106339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.106464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.106621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.106756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.637 [2024-07-24 22:34:27.106885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.106911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.107025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.107050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.107169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.107195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.107308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.107334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 00:25:01.637 [2024-07-24 22:34:27.107433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.637 [2024-07-24 22:34:27.107460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.637 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.107568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.107596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.107704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.107730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.107844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.107869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.107973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.108096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.108226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.108364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.108496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.108642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.108769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.108897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.108926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.109562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.109952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.109977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.110205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.110870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.110896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.110996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.111509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.111898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.111933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.112175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.112848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.112973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.112999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.113102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.113129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.113250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.113276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.638 [2024-07-24 22:34:27.113393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.113419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 
00:25:01.638 [2024-07-24 22:34:27.113535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.638 [2024-07-24 22:34:27.113581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.638 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.113696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.113721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.113838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.113869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.113973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.114105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-24 22:34:27.114236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.114359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.114491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.114623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.114776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-24 22:34:27.114909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.114935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.115035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.115066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.115186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.115212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.115321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.115347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 00:25:01.639 [2024-07-24 22:34:27.115446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.115472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639 [2024-07-24 22:34:27.115591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.639 [2024-07-24 22:34:27.115620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.639 qpair failed and we were unable to recover it. 
00:25:01.639-00:25:01.641 [the same posix.c:1023 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error pair repeats continuously from 2024-07-24 22:34:27.115591 through 22:34:27.131194, cycling over tqpair=0x7f02b0000b90, 0x7f02b8000b90, 0x7f02c0000b90, and once 0x168b120; every attempt targets addr=10.0.0.2, port=4420, and each ends with "qpair failed and we were unable to recover it."]
00:25:01.641 [2024-07-24 22:34:27.131309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.131335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 00:25:01.641 [2024-07-24 22:34:27.131437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.131467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 00:25:01.641 [2024-07-24 22:34:27.131590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.131620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 00:25:01.641 [2024-07-24 22:34:27.131728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.131756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 00:25:01.641 [2024-07-24 22:34:27.131857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.131883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 
00:25:01.641 [2024-07-24 22:34:27.132030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.641 [2024-07-24 22:34:27.132057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.641 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.132705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.132872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.132979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.133409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.133925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.133952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.134048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.134739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.134891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.134989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.135424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.135867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.135972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.136113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.136251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.136382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.136509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.136639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.136768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.136897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.136933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.137418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.137871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.137898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.138011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.138150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.138281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.138421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.138575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 00:25:01.642 [2024-07-24 22:34:27.138710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.642 [2024-07-24 22:34:27.138737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.642 qpair failed and we were unable to recover it. 
00:25:01.642 [2024-07-24 22:34:27.138852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.138890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.138997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.139513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.139931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.139961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.140066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.140196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.140320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.140447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.140615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.140767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.140896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.140930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.141592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.141883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.141909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.142286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.142813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.142954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.142980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.143623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.143901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.143997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.144127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 
00:25:01.643 [2024-07-24 22:34:27.144262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.144393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.144525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.144651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.643 [2024-07-24 22:34:27.144676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.643 qpair failed and we were unable to recover it. 00:25:01.643 [2024-07-24 22:34:27.144780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.144811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.144912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.144939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.145613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.145874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.145900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.146321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.146872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.146901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.147014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.147672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.147952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.147977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.148330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.148867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.148896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.149003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.149141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.149269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.149393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.149525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.149671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.149917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.149943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.150458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.150858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.150884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.151095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.151237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.151357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.151490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.151637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 00:25:01.644 [2024-07-24 22:34:27.151779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.644 [2024-07-24 22:34:27.151804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.644 qpair failed and we were unable to recover it. 
00:25:01.644 [2024-07-24 22:34:27.151903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.151929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 
00:25:01.645 [2024-07-24 22:34:27.152565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.152867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.152974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.153126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 
00:25:01.645 [2024-07-24 22:34:27.153259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.153391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.153536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.153668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.153796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 
00:25:01.645 [2024-07-24 22:34:27.153924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.153950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 
00:25:01.645 [2024-07-24 22:34:27.154619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.154881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.154907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.155019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.155045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 00:25:01.645 [2024-07-24 22:34:27.155163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.645 [2024-07-24 22:34:27.155190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.645 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-24 22:34:27.170471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.170506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.170606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.170631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.170734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.170759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.170859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.170885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.170992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-24 22:34:27.171120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.171250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.171392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.171517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.171643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 
00:25:01.647 [2024-07-24 22:34:27.171780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.171915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.647 [2024-07-24 22:34:27.171940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.647 qpair failed and we were unable to recover it. 00:25:01.647 [2024-07-24 22:34:27.172033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.172420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.172949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.172976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.173092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.173224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.173368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.173507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.173643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.173779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.173902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.173929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.174465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.174904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.174997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.175139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.175268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.175395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.175526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.175657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.175790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.175921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.175947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.176460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.176886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.176911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.177032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.177179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.177305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.177436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.177581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.177710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.177853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.177878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.178555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.178940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.178970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 00:25:01.648 [2024-07-24 22:34:27.179075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.179102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.648 qpair failed and we were unable to recover it. 
00:25:01.648 [2024-07-24 22:34:27.179212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.648 [2024-07-24 22:34:27.179237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.179339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.179365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.179463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.179496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.179617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.179643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.179741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.179766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 
00:25:01.649 [2024-07-24 22:34:27.179864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.179889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.179994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.180020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.180119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.180144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.180249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.180275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 00:25:01.649 [2024-07-24 22:34:27.180387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.180413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 
00:25:01.649 [2024-07-24 22:34:27.180518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.649 [2024-07-24 22:34:27.180545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.649 qpair failed and we were unable to recover it. 
[identical connect() failed, errno = 111 / qpair failed error pairs repeated from 22:34:27.180652 through 22:34:27.196013 for tqpair values 0x7f02c0000b90, 0x7f02b8000b90, 0x7f02b0000b90, and 0x168b120, all targeting addr=10.0.0.2, port=4420]
00:25:01.651 [2024-07-24 22:34:27.196113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.651 [2024-07-24 22:34:27.196139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.651 qpair failed and we were unable to recover it. 00:25:01.651 [2024-07-24 22:34:27.196245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.196371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.196544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.196670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.196805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.196938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.196964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.197503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.197885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.197911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.198134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.198793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.198944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.198973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.199473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.199911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.199937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.200051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.200182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.200307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.200432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.200600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.200756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 
00:25:01.652 [2024-07-24 22:34:27.200936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.200963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.201063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.201088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.652 qpair failed and we were unable to recover it. 00:25:01.652 [2024-07-24 22:34:27.201194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.652 [2024-07-24 22:34:27.201220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.201331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.201357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.201461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.201493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.201598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.201625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.201740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.201768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.201886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.201913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.202280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.202820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.202948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.202975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.203637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.203913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.203939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.204304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.204831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.204856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.204973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 
00:25:01.653 [2024-07-24 22:34:27.205658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.205930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.205957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.206070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-24 22:34:27.206096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.653 qpair failed and we were unable to recover it. 00:25:01.653 [2024-07-24 22:34:27.206202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.206328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.206452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.206594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.206727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.206856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.206883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.206988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.207680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.207959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.207986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.208389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.208922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.208948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.209048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.209732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.209888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.209983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.210383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.654 [2024-07-24 22:34:27.210962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.210988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 
00:25:01.654 [2024-07-24 22:34:27.211095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.654 [2024-07-24 22:34:27.211123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.654 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.211243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.211369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.211507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.211653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.211793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.211929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.211954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.212437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.212882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.212980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.213107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.213235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.213375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.213518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.213646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.213777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.213907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.213936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.214447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.214886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.214989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.215017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 
00:25:01.655 [2024-07-24 22:34:27.215117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.655 [2024-07-24 22:34:27.215147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.655 qpair failed and we were unable to recover it. 00:25:01.655 [2024-07-24 22:34:27.215250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.215371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.215499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.215637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.215777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.215901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.215928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.216455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.216879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.216990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.217123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.217252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.217384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.217511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.217654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.217777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.217918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.217944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.218450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.218884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.218910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.219177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 
00:25:01.656 [2024-07-24 22:34:27.219869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.219896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.219999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.220026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.220132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.656 [2024-07-24 22:34:27.220160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.656 qpair failed and we were unable to recover it. 00:25:01.656 [2024-07-24 22:34:27.220256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.220391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.220530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.220666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.220811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.220952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.220980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.221091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.221225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.221360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.221502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.221641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.221769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.221896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.221924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.222608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.222851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.222877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.223299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.223831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.223970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.223997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.224618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.224881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.224906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.225012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.225041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.225152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.225179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 
00:25:01.657 [2024-07-24 22:34:27.225280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.657 [2024-07-24 22:34:27.225306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.657 qpair failed and we were unable to recover it. 00:25:01.657 [2024-07-24 22:34:27.225412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.225437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.225547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.225573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.225666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.225692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.225787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.225813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.225920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.225953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.226607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.226865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.226894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.227263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.227807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.227931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.227956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.228577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.228873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.228978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.229107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.229232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.229361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.229507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.229645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.229784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 
00:25:01.658 [2024-07-24 22:34:27.229925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.229951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.230066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.230092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.230198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.230227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.658 [2024-07-24 22:34:27.230342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.658 [2024-07-24 22:34:27.230367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.658 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.230464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.230502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-24 22:34:27.230617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.230642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.230741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.230766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.230865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.230891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-24 22:34:27.231276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 00:25:01.659 [2024-07-24 22:34:27.231812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.659 [2024-07-24 22:34:27.231838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.659 qpair failed and we were unable to recover it. 
00:25:01.659 [2024-07-24 22:34:27.231935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.231961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.232881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.232910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.233940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.233965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.234075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.234100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.234201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.234227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.234323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.234349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.234451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.234486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.659 [2024-07-24 22:34:27.234599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.659 [2024-07-24 22:34:27.234627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.659 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.234727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.234754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.234863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.234889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.234998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.235824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.235851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.236932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.236960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.237882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.237909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.238890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.238995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.660 [2024-07-24 22:34:27.239791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.660 [2024-07-24 22:34:27.239820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.660 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.239924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.239951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.240880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.240986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.241910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.241936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.242881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.242980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.243913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.243939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.661 [2024-07-24 22:34:27.244701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.661 qpair failed and we were unable to recover it.
00:25:01.661 [2024-07-24 22:34:27.244813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.244838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.244937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.244963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.245928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.245954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.246912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.246938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.247052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.247077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.247194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.247220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.247328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.662 [2024-07-24 22:34:27.247353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.662 qpair failed and we were unable to recover it.
00:25:01.662 [2024-07-24 22:34:27.247454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.247492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.247596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.247622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.247725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.247751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.247852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.247879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.247987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 
00:25:01.662 [2024-07-24 22:34:27.248113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.248232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.248358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.248506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.248647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 
00:25:01.662 [2024-07-24 22:34:27.248782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.248915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.248943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 
00:25:01.662 [2024-07-24 22:34:27.249470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.662 [2024-07-24 22:34:27.249922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.662 [2024-07-24 22:34:27.249948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.662 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.250182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.250859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.250886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.250989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.251579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.251874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.251992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.252123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.252364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.252491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.252625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.252747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.252872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.252898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.253001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.253697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.253970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.253997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.254381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 00:25:01.663 [2024-07-24 22:34:27.254926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.663 [2024-07-24 22:34:27.254954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.663 qpair failed and we were unable to recover it. 
00:25:01.663 [2024-07-24 22:34:27.255061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.255190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.255319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.255459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.255621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 
00:25:01.664 [2024-07-24 22:34:27.255751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.255889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.255914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 
00:25:01.664 [2024-07-24 22:34:27.256428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.256952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.256979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.257085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 
00:25:01.664 [2024-07-24 22:34:27.257215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.257353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.257475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.257608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 00:25:01.664 [2024-07-24 22:34:27.257742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 
00:25:01.664 [2024-07-24 22:34:27.257875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.664 [2024-07-24 22:34:27.257902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.664 qpair failed and we were unable to recover it. 
[identical connect()-failed (errno = 111) / qpair-failure pairs repeated from 22:34:27.258015 through 22:34:27.273543 for tqpair values 0x7f02b0000b90, 0x7f02b8000b90, 0x7f02c0000b90, and 0x168b120 -- repeats elided]
00:25:01.667 [2024-07-24 22:34:27.273647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.273672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 00:25:01.667 [2024-07-24 22:34:27.273766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.273791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 00:25:01.667 [2024-07-24 22:34:27.273903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.273928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 00:25:01.667 [2024-07-24 22:34:27.274025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.274050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 00:25:01.667 [2024-07-24 22:34:27.274149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.274174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 
00:25:01.667 [2024-07-24 22:34:27.274266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.667 [2024-07-24 22:34:27.274291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.667 qpair failed and we were unable to recover it. 00:25:01.667 [2024-07-24 22:34:27.274409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.274434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.274543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.274569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.274686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.274711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.274821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.274846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.274950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.274976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.275650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.275929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.275956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.276336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.276874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.276900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.277010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.277632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.277900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.277926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.278292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.278860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.278886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 
00:25:01.668 [2024-07-24 22:34:27.279005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.279031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.279148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.279176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.668 qpair failed and we were unable to recover it. 00:25:01.668 [2024-07-24 22:34:27.279276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.668 [2024-07-24 22:34:27.279302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.279400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.279428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.279552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.279579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.279682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.279708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.279806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.279832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.279925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.279952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.280311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.280860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.280886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.280988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.281634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.281949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.281985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.282406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.282923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.282949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 
00:25:01.669 [2024-07-24 22:34:27.283054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.283085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.283205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.283231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.283333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.669 [2024-07-24 22:34:27.283358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.669 qpair failed and we were unable to recover it. 00:25:01.669 [2024-07-24 22:34:27.283461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.283495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.283600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.283627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 
00:25:01.670 [2024-07-24 22:34:27.283734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.283769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.283887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.283916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 
00:25:01.670 [2024-07-24 22:34:27.284446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.284883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.284985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.285011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 
00:25:01.670 [2024-07-24 22:34:27.285117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.285143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.285248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.285275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.285377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.285404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.670 [2024-07-24 22:34:27.285512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.670 [2024-07-24 22:34:27.285540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.670 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.285649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.285676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.285784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.285811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.285922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.285950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.286453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.286879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.286982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.287132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.287271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.287398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.287531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.287685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.287816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.287945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.287970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.288463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.288877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.288902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.289007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 
00:25:01.960 [2024-07-24 22:34:27.289142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.289280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.289414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.289560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.960 qpair failed and we were unable to recover it. 00:25:01.960 [2024-07-24 22:34:27.289692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.960 [2024-07-24 22:34:27.289718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.289817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.289844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.289961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.289987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.290488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.290893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.290919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.291144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.291850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.291876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.291990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.292502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.292925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.292950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.293195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.293847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.293871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.293993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.294122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.294265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.294394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 
00:25:01.961 [2024-07-24 22:34:27.294524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.961 [2024-07-24 22:34:27.294671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.961 [2024-07-24 22:34:27.294711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.961 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.294819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.294852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.294950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.294976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.295077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-07-24 22:34:27.295230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.295357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.295478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.295613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.295751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-07-24 22:34:27.295889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.295918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.962 [2024-07-24 22:34:27.296540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.296926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.296951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 00:25:01.962 [2024-07-24 22:34:27.297050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.962 [2024-07-24 22:34:27.297075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.962 qpair failed and we were unable to recover it. 
00:25:01.963 [2024-07-24 22:34:27.302918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1699190 (9): Bad file descriptor
00:25:01.965 [2024-07-24 22:34:27.312259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.312392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.312522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.312654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.312784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 
00:25:01.965 [2024-07-24 22:34:27.312957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.312988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 
00:25:01.965 [2024-07-24 22:34:27.313626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.313875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.313902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.314005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.965 [2024-07-24 22:34:27.314030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.965 qpair failed and we were unable to recover it. 00:25:01.965 [2024-07-24 22:34:27.314145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.314286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.314423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.314562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.314701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.314831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.314964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.314991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.315671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.315927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.315954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.316318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.316854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.316880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.316976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.317646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.317913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.317939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.318288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.318877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.318903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 
00:25:01.966 [2024-07-24 22:34:27.319004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.319030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.966 [2024-07-24 22:34:27.319131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.966 [2024-07-24 22:34:27.319158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.966 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.319306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.319431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.319572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.319706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.319832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.319957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.319983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.320350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.320907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.320935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.321042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.321175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.321327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.321470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.321632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.321766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.321895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.321921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.322433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.322856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.322884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.323004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.323154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.323281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.323413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.323587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.323718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 
00:25:01.967 [2024-07-24 22:34:27.323870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.323903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.324004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.324030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.324137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.324164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.324264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.324290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.967 qpair failed and we were unable to recover it. 00:25:01.967 [2024-07-24 22:34:27.324400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.967 [2024-07-24 22:34:27.324427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.324549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.324576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.324674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.324700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.324796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.324822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.324949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.324975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.325076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.325214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.325341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.325478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.325624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.325758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.325888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.325915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.326596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.326871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.326898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.327256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.327793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.327933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.327963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.328626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.328884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.328910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.329019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.329049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.329153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.329180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 
00:25:01.968 [2024-07-24 22:34:27.329281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.329314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.329415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.968 [2024-07-24 22:34:27.329441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.968 qpair failed and we were unable to recover it. 00:25:01.968 [2024-07-24 22:34:27.329552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.329579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.329695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.329720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.329822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.329847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.329946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.329971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.330607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.330887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.330996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.331155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.331289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.331433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.331562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.331691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.331824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.331963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.331992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.332658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.332922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.332948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 
00:25:01.969 [2024-07-24 22:34:27.333314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.969 qpair failed and we were unable to recover it. 00:25:01.969 [2024-07-24 22:34:27.333847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.969 [2024-07-24 22:34:27.333872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.970 [2024-07-24 22:34:27.333972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.333997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.970 [2024-07-24 22:34:27.334645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.334916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.334942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.970 [2024-07-24 22:34:27.335322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 00:25:01.970 [2024-07-24 22:34:27.335851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.970 [2024-07-24 22:34:27.335879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.970 qpair failed and we were unable to recover it. 
00:25:01.973 [2024-07-24 22:34:27.348921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.348946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.349904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.349930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.350963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.350998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.351921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.351948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.352894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.352921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.973 [2024-07-24 22:34:27.353678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.973 [2024-07-24 22:34:27.353706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.973 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.353805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.353831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.353932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.353958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.354957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.354983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.355895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.355922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.356899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.356926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.357883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.357909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.358004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.974 [2024-07-24 22:34:27.358030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.974 qpair failed and we were unable to recover it.
00:25:01.974 [2024-07-24 22:34:27.358133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.358260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.358442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.358581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.358738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.358864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.358890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.359940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.359966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.360905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.360931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.361880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.361975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.362001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.362108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.362134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.362234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.362259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.362355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.362381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.362475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.975 [2024-07-24 22:34:27.362508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.975 qpair failed and we were unable to recover it.
00:25:01.975 [2024-07-24 22:34:27.362607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.975 [2024-07-24 22:34:27.362632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.975 qpair failed and we were unable to recover it. 00:25:01.975 [2024-07-24 22:34:27.362739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.975 [2024-07-24 22:34:27.362765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.975 qpair failed and we were unable to recover it. 00:25:01.975 [2024-07-24 22:34:27.362886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.975 [2024-07-24 22:34:27.362913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.975 qpair failed and we were unable to recover it. 00:25:01.975 [2024-07-24 22:34:27.363030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.363176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.363314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.363449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.363597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.363737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.363868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.363894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.364001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.364652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.364927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.364957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.365313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.365876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.365904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.366002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.366682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.366936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.366962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 
00:25:01.976 [2024-07-24 22:34:27.367321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.976 [2024-07-24 22:34:27.367779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.976 qpair failed and we were unable to recover it. 00:25:01.976 [2024-07-24 22:34:27.367887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.367914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.368035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.368734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.368887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.368990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.369375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.369908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.369934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.370037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.370171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.370414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.370564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.370694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.370829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.370962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.370989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.371504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.371900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.371927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.372151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 
00:25:01.977 [2024-07-24 22:34:27.372844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.977 [2024-07-24 22:34:27.372870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.977 qpair failed and we were unable to recover it. 00:25:01.977 [2024-07-24 22:34:27.372978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 
00:25:01.978 [2024-07-24 22:34:27.373501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.373913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.373942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 
00:25:01.978 [2024-07-24 22:34:27.374183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 
00:25:01.978 [2024-07-24 22:34:27.374860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.374887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.374991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.375017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.375122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.375149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.375262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.375287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 00:25:01.978 [2024-07-24 22:34:27.375387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.978 [2024-07-24 22:34:27.375413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.978 qpair failed and we were unable to recover it. 
00:25:01.978 [2024-07-24 22:34:27.375521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.375549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.375647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.375673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.375770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.375795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.375909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.375934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.376956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.376982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.377083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.377112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.978 qpair failed and we were unable to recover it.
00:25:01.978 [2024-07-24 22:34:27.377217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.978 [2024-07-24 22:34:27.377243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.377341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.377367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.377584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.377613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.377728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.377755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.377872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.377897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.377998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.378909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.378937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.379887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.379992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.380921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.380947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.381950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.381976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.382076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.382108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.382216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.382243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.382357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.979 [2024-07-24 22:34:27.382383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.979 qpair failed and we were unable to recover it.
00:25:01.979 [2024-07-24 22:34:27.382491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.382518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.382623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.382649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.382751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.382778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.382894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.382920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.383960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.383987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.384897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.384923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.385882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.385990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.386898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.386925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.387026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.387053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.387172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.387198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.387310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.387335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.387460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.387495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.980 qpair failed and we were unable to recover it.
00:25:01.980 [2024-07-24 22:34:27.387604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.980 [2024-07-24 22:34:27.387636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.387738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.387765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.387884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.387910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.388043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.388178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.388309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.388445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.981 [2024-07-24 22:34:27.388582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.981 qpair failed and we were unable to recover it.
00:25:01.981 [2024-07-24 22:34:27.388685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.388712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.388808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.388834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.388936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.388962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.389377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.389913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.389940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.390036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.390690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.390960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.390986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.391196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.391346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.391475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.391620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.391751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.391897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.391924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.392027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.392157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.392335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.392463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.392608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 00:25:01.981 [2024-07-24 22:34:27.392735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.981 [2024-07-24 22:34:27.392762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.981 qpair failed and we were unable to recover it. 
00:25:01.981 [2024-07-24 22:34:27.392862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.392888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.392987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.393577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.393875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.393988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.394123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.394244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.394374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.394506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.394657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.394787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.394913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.394940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.395602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.395889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.395986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.396113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.396248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.396374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.396520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.396658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.396804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.396970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.396998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 
00:25:01.982 [2024-07-24 22:34:27.397635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.982 qpair failed and we were unable to recover it. 00:25:01.982 [2024-07-24 22:34:27.397894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.982 [2024-07-24 22:34:27.397920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.398385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.398911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.398938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.399042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.399167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.399316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.399471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.399617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.399762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.399904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.399930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.400413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.400877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.400906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.401012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.401141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.401269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.401396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.401530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 00:25:01.983 [2024-07-24 22:34:27.401669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.983 [2024-07-24 22:34:27.401695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.983 qpair failed and we were unable to recover it. 
00:25:01.983 [2024-07-24 22:34:27.401813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.983 [2024-07-24 22:34:27.401838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.983 qpair failed and we were unable to recover it.
00:25:01.983 [2024-07-24 22:34:27.401950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.983 [2024-07-24 22:34:27.401976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.983 qpair failed and we were unable to recover it.
00:25:01.983 [2024-07-24 22:34:27.402097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.983 [2024-07-24 22:34:27.402125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.983 qpair failed and we were unable to recover it.
00:25:01.983 [2024-07-24 22:34:27.402249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.983 [2024-07-24 22:34:27.402276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.983 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.402387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.402421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.402549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.402577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.402686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.402712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.402807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.402832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.402926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.402951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.403894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.403988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.404959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.404984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.405899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.405926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.406963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.406990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.984 [2024-07-24 22:34:27.407090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.984 [2024-07-24 22:34:27.407117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.984 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.407943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.407973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.408906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.408933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.409961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.409986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.410886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.410912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.411965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.411992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.412089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.985 [2024-07-24 22:34:27.412115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.985 qpair failed and we were unable to recover it.
00:25:01.985 [2024-07-24 22:34:27.412217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.412341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.412461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.412611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.412795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.412934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.412961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.413878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.413903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.414150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.414272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.414401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.414544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.986 [2024-07-24 22:34:27.414674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.986 qpair failed and we were unable to recover it.
00:25:01.986 [2024-07-24 22:34:27.414778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.414806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.414916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.414942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 
00:25:01.986 [2024-07-24 22:34:27.415483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.415893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.415918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 
00:25:01.986 [2024-07-24 22:34:27.416171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 
00:25:01.986 [2024-07-24 22:34:27.416844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.416870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.416981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.417007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.417110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.417137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.417238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.417264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.986 qpair failed and we were unable to recover it. 00:25:01.986 [2024-07-24 22:34:27.417364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.986 [2024-07-24 22:34:27.417390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.417492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.417519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.417620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.417646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.417752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.417781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.417900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.417927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.418159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.418862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.418890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.418992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.419523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.419913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.419938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.420058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.420196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.420337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.420465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.420625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.420753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.420884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.420909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.421570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.421973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.421998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.422100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.422125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 
00:25:01.987 [2024-07-24 22:34:27.422228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.422253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.422364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.987 [2024-07-24 22:34:27.422390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.987 qpair failed and we were unable to recover it. 00:25:01.987 [2024-07-24 22:34:27.422502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.422533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.422639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.422667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.422880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.422906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.423024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.423695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.423959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.423984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.424377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.424916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.424940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.425044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.425175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.425297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.425425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.425603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.425737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.425866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.425895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.426004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.426032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.426145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.426172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 00:25:01.988 [2024-07-24 22:34:27.426269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.988 [2024-07-24 22:34:27.426296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.988 qpair failed and we were unable to recover it. 
00:25:01.988 [2024-07-24 22:34:27.426404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.988 [2024-07-24 22:34:27.426434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.988 qpair failed and we were unable to recover it.
00:25:01.988 [2024-07-24 22:34:27.426543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.988 [2024-07-24 22:34:27.426570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.988 qpair failed and we were unable to recover it.
00:25:01.988 [2024-07-24 22:34:27.426685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.988 [2024-07-24 22:34:27.426713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.426814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.426840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.426959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.426984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.427949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.427977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.428903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.428929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.429970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.429996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.430874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.430900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.989 qpair failed and we were unable to recover it.
00:25:01.989 [2024-07-24 22:34:27.431816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.989 [2024-07-24 22:34:27.431842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.431946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.431972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.432905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.432931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.433957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.433984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.434947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.434976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.435916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.435942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.990 [2024-07-24 22:34:27.436878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.990 [2024-07-24 22:34:27.436905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.990 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.437883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.437987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.438903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.438930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.439954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.439980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.440877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.440902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.441030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.441157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.441329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.441451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.991 [2024-07-24 22:34:27.441587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.991 qpair failed and we were unable to recover it.
00:25:01.991 [2024-07-24 22:34:27.441688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.992 [2024-07-24 22:34:27.441714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.992 qpair failed and we were unable to recover it.
00:25:01.992 [2024-07-24 22:34:27.441829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.992 [2024-07-24 22:34:27.441858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.992 qpair failed and we were unable to recover it.
00:25:01.992 [2024-07-24 22:34:27.441968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.992 [2024-07-24 22:34:27.441995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.992 qpair failed and we were unable to recover it.
00:25:01.992 [2024-07-24 22:34:27.442090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.442733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.442895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.442995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.443373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.443901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.443928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.444044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.444741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.444892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.444999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.445125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.445300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.445473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.445616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.445742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.445874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.445900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.446003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 
00:25:01.992 [2024-07-24 22:34:27.446147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.446280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.446417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.446553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.992 qpair failed and we were unable to recover it. 00:25:01.992 [2024-07-24 22:34:27.446682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.992 [2024-07-24 22:34:27.446707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.446804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.446829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.446936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.446963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.447495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.447899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.447925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.448151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.448836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.448969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.448995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.449495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.449936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.449963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.450065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.450208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.450348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.450498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.450624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.450756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 
00:25:01.993 [2024-07-24 22:34:27.450891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.993 [2024-07-24 22:34:27.450916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.993 qpair failed and we were unable to recover it. 00:25:01.993 [2024-07-24 22:34:27.451018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 
00:25:01.994 [2024-07-24 22:34:27.451564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.451946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.451972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 00:25:01.994 [2024-07-24 22:34:27.452076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.994 [2024-07-24 22:34:27.452104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.994 qpair failed and we were unable to recover it. 
00:25:01.994 [2024-07-24 22:34:27.452206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.452344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.452490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.452641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.452770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.452899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.452927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.453973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.453998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.454879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.454905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.994 [2024-07-24 22:34:27.455839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.994 qpair failed and we were unable to recover it.
00:25:01.994 [2024-07-24 22:34:27.455955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.455981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.456884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.456910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.457872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.457981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.458908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.458934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.459036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.459063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.459162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.459188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.459288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.459314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.459412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.459438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.462553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.462580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.462684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.462711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.462827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.462854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.462974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.995 [2024-07-24 22:34:27.463946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.995 [2024-07-24 22:34:27.463972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.995 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.464972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.464998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.465913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.465938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.466891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.466916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.467893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.467988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.468875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.468989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.469020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.996 [2024-07-24 22:34:27.469126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.996 [2024-07-24 22:34:27.469154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.996 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.469274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.469302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.469410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.469438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.469569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.469596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.469729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.469756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.469894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.469920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.470911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.997 [2024-07-24 22:34:27.470941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:01.997 qpair failed and we were unable to recover it.
00:25:01.997 [2024-07-24 22:34:27.471155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.471287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.471426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.471556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.471687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 
00:25:01.997 [2024-07-24 22:34:27.471849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.471875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 
00:25:01.997 [2024-07-24 22:34:27.472589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.472861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.472888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 
00:25:01.997 [2024-07-24 22:34:27.473273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.473848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 00:25:01.997 [2024-07-24 22:34:27.473983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.997 [2024-07-24 22:34:27.474008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.997 qpair failed and we were unable to recover it. 
00:25:01.997 [2024-07-24 22:34:27.474109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.474275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.474410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.474543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.474705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.474830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.474855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.474977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.475519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.475952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.475978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.476113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.476252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.476394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.476531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.476677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.476805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.476934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.476960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.477656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.477917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.477943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.478306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.478867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.478893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 
00:25:01.998 [2024-07-24 22:34:27.478994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.479020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.479119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.998 [2024-07-24 22:34:27.479145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.998 qpair failed and we were unable to recover it. 00:25:01.998 [2024-07-24 22:34:27.479248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.479367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.479490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-24 22:34:27.479616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.479742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.479873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.479900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-24 22:34:27.480286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.480834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-24 22:34:27.480970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.480996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.481094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.481120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.481219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.481247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.481346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.481372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 00:25:01.999 [2024-07-24 22:34:27.481496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.481530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it. 
00:25:01.999 [2024-07-24 22:34:27.481647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.999 [2024-07-24 22:34:27.481674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:01.999 qpair failed and we were unable to recover it.
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats ~114 more times between 22:34:27.481 and 22:34:27.497, cycling through tqpair handles 0x7f02b8000b90, 0x7f02b0000b90, and 0x7f02c0000b90, always with addr=10.0.0.2, port=4420, errno = 111, and ending "qpair failed and we were unable to recover it." ...]
00:25:02.002 [2024-07-24 22:34:27.497089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.497210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.497338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.497468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.497623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-24 22:34:27.497753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.497955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.497981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 
00:25:02.002 [2024-07-24 22:34:27.498486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.002 [2024-07-24 22:34:27.498921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.002 [2024-07-24 22:34:27.498948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.002 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.499067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.499204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.499341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.499585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.499727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.499869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.499895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.499998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.500662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.500925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.500951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.501318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.501906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.501932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.502036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.502173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.502341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.502489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.502625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.502750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.502880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.502909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.503407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.503960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.503987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 
00:25:02.003 [2024-07-24 22:34:27.504095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.504120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.504227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.504252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.504353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.504379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.504485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.003 [2024-07-24 22:34:27.504514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.003 qpair failed and we were unable to recover it. 00:25:02.003 [2024-07-24 22:34:27.504623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.504650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.504756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.504784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.504892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.504918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.505419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.505871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.505897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.506004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.506146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.506272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.506400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.506658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.506793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.506921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.506947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.507612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.507931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.507959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.508327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.508829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.508966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.508993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 
00:25:02.004 [2024-07-24 22:34:27.509707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.509877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.509978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.510003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.510106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.004 [2024-07-24 22:34:27.510132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.004 qpair failed and we were unable to recover it. 00:25:02.004 [2024-07-24 22:34:27.510236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.510399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.510555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.510686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.510826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.510968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.510993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.511106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.511248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.511375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.511511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.511640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.511759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.511883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.511908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.512410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.512954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.512981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.513086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.513217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.513367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.513506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.513635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.513773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.513899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.513925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.514423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.514961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.514988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.515099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.515228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.515365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.515489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.515618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 
00:25:02.005 [2024-07-24 22:34:27.515753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.515884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.515911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.516015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.005 [2024-07-24 22:34:27.516042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.005 qpair failed and we were unable to recover it. 00:25:02.005 [2024-07-24 22:34:27.516162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.516291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.516421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.516567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.516718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.516849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.516874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.516975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.517100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.517223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.517349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.517476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.517636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.517764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.517963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.517989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.518499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.518905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.518931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.519176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.519820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.519957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.519984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.006 [2024-07-24 22:34:27.520519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.520934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.520960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 00:25:02.006 [2024-07-24 22:34:27.521068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.006 [2024-07-24 22:34:27.521096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.006 qpair failed and we were unable to recover it. 
00:25:02.007 [2024-07-24 22:34:27.527200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.007 [2024-07-24 22:34:27.527227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.007 qpair failed and we were unable to recover it.
00:25:02.007 [2024-07-24 22:34:27.527328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.007 [2024-07-24 22:34:27.527355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.007 qpair failed and we were unable to recover it.
00:25:02.007 [2024-07-24 22:34:27.527450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.007 [2024-07-24 22:34:27.527476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.007 qpair failed and we were unable to recover it.
00:25:02.007 [2024-07-24 22:34:27.527615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.007 [2024-07-24 22:34:27.527652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.007 qpair failed and we were unable to recover it.
00:25:02.007 [2024-07-24 22:34:27.527776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.007 [2024-07-24 22:34:27.527805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.007 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.532620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.008 [2024-07-24 22:34:27.532646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.008 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.532762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.008 [2024-07-24 22:34:27.532791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.008 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.532899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.008 [2024-07-24 22:34:27.532926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.008 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.533041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.008 [2024-07-24 22:34:27.533075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.008 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.533245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.008 [2024-07-24 22:34:27.533274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.008 qpair failed and we were unable to recover it.
00:25:02.008 [2024-07-24 22:34:27.536090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.536236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.536372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.536503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.536631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 
00:25:02.008 [2024-07-24 22:34:27.536779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.536910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.536937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.537043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.008 [2024-07-24 22:34:27.537070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.008 qpair failed and we were unable to recover it. 00:25:02.008 [2024-07-24 22:34:27.537170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.537301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.537437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.537604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.537747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.537879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.537905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.538013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.538151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.538289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.538418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.538567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.538727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.538864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.538894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.539549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.539949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.539975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.540079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.540220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.540365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.540508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.540652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.540780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.540908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.540936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 
00:25:02.009 [2024-07-24 22:34:27.541618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.541875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.541902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.542019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.009 [2024-07-24 22:34:27.542047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.009 qpair failed and we were unable to recover it. 00:25:02.009 [2024-07-24 22:34:27.542155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.542287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.542430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.542579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.542719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.542848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.542875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.542995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.543681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.543930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.543955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.544363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.544965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.544991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.545105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.545250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.545388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.545530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.545660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.545790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.545923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.545951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.546060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.546086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.546196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.546223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 00:25:02.010 [2024-07-24 22:34:27.546321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.010 [2024-07-24 22:34:27.546347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.010 qpair failed and we were unable to recover it. 
00:25:02.010 [2024-07-24 22:34:27.546461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.010 [2024-07-24 22:34:27.546498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.010 qpair failed and we were unable to recover it.
00:25:02.012 [... the three lines above repeat from 22:34:27.546604 through 22:34:27.562383 for tqpair=0x7f02c0000b90, 0x7f02b0000b90, and 0x7f02b8000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:02.012 [2024-07-24 22:34:27.562493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.562519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.562624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.562649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.562776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.562805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.562914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.562941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.563074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 
00:25:02.012 [2024-07-24 22:34:27.563205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.563335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.563465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.563616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.563791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 
00:25:02.012 [2024-07-24 22:34:27.563937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.563966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 
00:25:02.012 [2024-07-24 22:34:27.564646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.564946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.564971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.565086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.565111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.565225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.565252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 
00:25:02.012 [2024-07-24 22:34:27.565366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.565391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.565509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.565536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.012 [2024-07-24 22:34:27.565637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.012 [2024-07-24 22:34:27.565663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.012 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.565760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.565786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.565903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.565928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.566034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.566695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.566971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.566997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.567378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.567892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.567917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.568032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.568165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.568299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.568442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.568581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.568725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.568873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.568898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.569394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.569949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.569976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.570085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.570250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.570379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.570516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.570645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.570775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.570922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.570947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.571502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.571892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.571923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.572053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.572184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.572315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.572456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.572600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 00:25:02.013 [2024-07-24 22:34:27.572741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.013 [2024-07-24 22:34:27.572767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.013 qpair failed and we were unable to recover it. 
00:25:02.013 [2024-07-24 22:34:27.572875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.013 [2024-07-24 22:34:27.572902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.013 qpair failed and we were unable to recover it.
[... ~110 further identical connect() failure records omitted: [2024-07-24 22:34:27.573021] through [2024-07-24 22:34:27.588654], each a posix.c:1023:posix_sock_create connect() failure with errno = 111 followed by an nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error on tqpair 0x7f02c0000b90, 0x7f02b8000b90, or 0x7f02b0000b90, all with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it." ...]
00:25:02.015 [2024-07-24 22:34:27.588785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.588810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.588913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.588938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.589516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.589903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.589929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.590044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.590166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.590298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.590439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.590581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.590717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.590876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.590901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.591579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.591881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.591998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.592173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.592305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.592450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.592590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.592724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.592853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.592880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.592996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.593139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.593268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.593427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.593594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 
00:25:02.015 [2024-07-24 22:34:27.593734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.593875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.593901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.594006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.015 [2024-07-24 22:34:27.594032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.015 qpair failed and we were unable to recover it. 00:25:02.015 [2024-07-24 22:34:27.594134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.594273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.594410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.594582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.594709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.594847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.594974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.594998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.595111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.595243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.595374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.595504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.595639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.595811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.595956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.595985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.596547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.596858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.596980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.597117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.597246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.597406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.597533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.597657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.597782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.597914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.597950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.598092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.598231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.598397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.598559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.598718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.598874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.598900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.599422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.599890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 00:25:02.016 [2024-07-24 22:34:27.599993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.016 [2024-07-24 22:34:27.600020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.016 qpair failed and we were unable to recover it. 
00:25:02.016 [2024-07-24 22:34:27.602177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.016 [2024-07-24 22:34:27.602206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.016 qpair failed and we were unable to recover it.
00:25:02.016 [2024-07-24 22:34:27.603542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.016 [2024-07-24 22:34:27.603577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.016 qpair failed and we were unable to recover it.
00:25:02.016 [2024-07-24 22:34:27.603698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.016 [2024-07-24 22:34:27.603727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.016 qpair failed and we were unable to recover it.
00:25:02.018 [2024-07-24 22:34:27.615223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.615359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.615505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.615653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.615784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.615914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.615940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.616616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.616871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.616897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.617283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.617842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.617966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.617991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.618678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.618959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.618986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.619337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.619885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.619911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.620029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.620724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.620890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.620994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.621377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.621954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.621983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.622092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.622250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.622396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.622541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.622665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.622793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.622920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.622947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.623049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.623074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.623180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.623206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.018 [2024-07-24 22:34:27.623323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.623349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 
00:25:02.018 [2024-07-24 22:34:27.623456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.018 [2024-07-24 22:34:27.623489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.018 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.623606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.623632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.623741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.623766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.623892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.623917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.624023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.624170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.624302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.624439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.624582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.624728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.624857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.624885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.625571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.625883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.625986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.626112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.626245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.626374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.626516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.626644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.626801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.626946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.626973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.627634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.627892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.627917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.628283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.628809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.628946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.628972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.629113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.629261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.629425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.629599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.629740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.629877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.629905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.630471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.630927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.630957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.631077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.631210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.631361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.631511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.631676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.631826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.631954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.631980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.632640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.632931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.632958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.633331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.019 [2024-07-24 22:34:27.633908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.633934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 
00:25:02.019 [2024-07-24 22:34:27.634039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.019 [2024-07-24 22:34:27.634066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.019 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.634203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.634332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.634467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.634633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.634766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.634905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.634932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.635472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.635892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.635918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.636154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.636846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.636881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.636993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.637578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.637886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.637987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.638111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.638238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.638401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.638536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.638669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.638810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 
00:25:02.308 [2024-07-24 22:34:27.638965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.638991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.639097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.308 [2024-07-24 22:34:27.639123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.308 qpair failed and we were unable to recover it. 00:25:02.308 [2024-07-24 22:34:27.639283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-24 22:34:27.639312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-24 22:34:27.639416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-24 22:34:27.639445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 00:25:02.309 [2024-07-24 22:34:27.639558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.309 [2024-07-24 22:34:27.639585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.309 qpair failed and we were unable to recover it. 
00:25:02.310 [2024-07-24 22:34:27.645942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.310 [2024-07-24 22:34:27.645967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.310 qpair failed and we were unable to recover it.
00:25:02.310 [2024-07-24 22:34:27.646068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.310 [2024-07-24 22:34:27.646093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.310 qpair failed and we were unable to recover it.
00:25:02.310 [2024-07-24 22:34:27.646225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.310 [2024-07-24 22:34:27.646264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.310 qpair failed and we were unable to recover it.
00:25:02.310 [2024-07-24 22:34:27.646387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.310 [2024-07-24 22:34:27.646417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.310 qpair failed and we were unable to recover it.
00:25:02.310 [2024-07-24 22:34:27.646541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.310 [2024-07-24 22:34:27.646570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.310 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.651558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.651586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.651706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.651735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.651857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.651890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.652028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.652056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.652162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.652188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.652289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.652315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.652417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.652443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.652553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.652581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.652699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.652727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.652833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.652860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 
00:25:02.311 [2024-07-24 22:34:27.652982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.653008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.653120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.653146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.653261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.653287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.653391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.653417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 00:25:02.311 [2024-07-24 22:34:27.653539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.311 [2024-07-24 22:34:27.653566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.311 qpair failed and we were unable to recover it. 
00:25:02.311 [2024-07-24 22:34:27.653670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.653703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.653840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.653869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.653988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.311 [2024-07-24 22:34:27.654818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.311 [2024-07-24 22:34:27.654845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.311 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.654949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.654975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.655872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.655974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.656923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.656951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.657973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.657998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.658894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.658921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.659036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.312 [2024-07-24 22:34:27.659064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.312 qpair failed and we were unable to recover it.
00:25:02.312 [2024-07-24 22:34:27.659168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.659891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.659985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.660872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.660990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.661910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.661935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.662870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.662897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.663854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.663879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.664003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.664030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.664129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.664156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.664261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.313 [2024-07-24 22:34:27.664289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.313 qpair failed and we were unable to recover it.
00:25:02.313 [2024-07-24 22:34:27.664390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.664416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.664515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.664542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.664647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.664674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.664775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.664801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.664902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.664928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.665860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.665886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.666886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.314 [2024-07-24 22:34:27.666911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.314 qpair failed and we were unable to recover it.
00:25:02.314 [2024-07-24 22:34:27.667010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 
00:25:02.314 [2024-07-24 22:34:27.667702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.667874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.667978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 
00:25:02.314 [2024-07-24 22:34:27.668398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.668960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.668985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 
00:25:02.314 [2024-07-24 22:34:27.669089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.669115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.669244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.669273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.669383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.669412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.669538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.314 [2024-07-24 22:34:27.669565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.314 qpair failed and we were unable to recover it. 00:25:02.314 [2024-07-24 22:34:27.669668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.669695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.669802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.669829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.669933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.669961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.670495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.670910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.670936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.671037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.671177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.671324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.671465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.671634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.671769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.671916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.671943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.672640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.672934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.672960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.673347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.673881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.673909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.674011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.674140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.674260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.674401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.674540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 
00:25:02.315 [2024-07-24 22:34:27.674667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.315 [2024-07-24 22:34:27.674693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.315 qpair failed and we were unable to recover it. 00:25:02.315 [2024-07-24 22:34:27.674788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.674815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.674921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.674949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.675302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.675830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.675858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.675991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.676648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.676938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.676964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.677313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.677844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.677870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.677975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.678003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.678118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.678144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.678251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.678277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.678393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.678420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 00:25:02.316 [2024-07-24 22:34:27.678544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.316 [2024-07-24 22:34:27.678570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.316 qpair failed and we were unable to recover it. 
00:25:02.316 [2024-07-24 22:34:27.678669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.316 [2024-07-24 22:34:27.678694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.316 qpair failed and we were unable to recover it.
(the connect() failed / qpair failed sequence above repeats ~110 more times, errno = 111 each time, cycling through tqpairs 0x7f02b0000b90, 0x7f02b8000b90 and 0x7f02c0000b90 against addr=10.0.0.2, port=4420, through [2024-07-24 22:34:27.693920])
00:25:02.319 [2024-07-24 22:34:27.694020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.319 [2024-07-24 22:34:27.694046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.319 qpair failed and we were unable to recover it. 00:25:02.319 [2024-07-24 22:34:27.694152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.694273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.694393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.694530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.694661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.694794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.694939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.694965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.695357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.695922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.695949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.696070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.696204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.696345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.696487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.696631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.696773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.696899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.696925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.697427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.697959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.697985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.698092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.698227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.698353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.698485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.698607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 
00:25:02.320 [2024-07-24 22:34:27.698741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.698873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.698900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.699001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.699027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.320 [2024-07-24 22:34:27.699136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.320 [2024-07-24 22:34:27.699162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.320 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.699296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.699431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.699561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.699698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.699822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.699964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.699991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.700116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.700259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.700385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.700520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.700654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.700783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.700912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.700938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.701465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.701902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.701930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.702175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.702839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.702966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.702992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.703092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.703117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.703234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.703261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.703374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.703400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 
00:25:02.321 [2024-07-24 22:34:27.703507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.703535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.703635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.321 [2024-07-24 22:34:27.703663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.321 qpair failed and we were unable to recover it. 00:25:02.321 [2024-07-24 22:34:27.703762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.703788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.703914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.703940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.704049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 
00:25:02.322 [2024-07-24 22:34:27.704187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.704331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.704453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.704589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.704734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 
00:25:02.322 [2024-07-24 22:34:27.704896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.704925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.705027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.705053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.705152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.705178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.705275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.705300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 00:25:02.322 [2024-07-24 22:34:27.705413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.322 [2024-07-24 22:34:27.705440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.322 qpair failed and we were unable to recover it. 
00:25:02.322 [2024-07-24 22:34:27.705549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.705578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.705684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.705712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.705815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.705842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.705962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.705988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.706897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.706923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.707916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.707944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.322 [2024-07-24 22:34:27.708733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.322 [2024-07-24 22:34:27.708762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.322 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.708876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.708902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.709878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.709904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.710874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.710977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.711881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.711999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.712961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.712987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.323 [2024-07-24 22:34:27.713818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.323 qpair failed and we were unable to recover it.
00:25:02.323 [2024-07-24 22:34:27.713928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.713955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.714869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.714981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.715895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.715924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.716955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.716984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.717879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.717906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.718012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.718041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.718143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.718169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.718278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.718304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.718407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.324 [2024-07-24 22:34:27.718433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.324 qpair failed and we were unable to recover it.
00:25:02.324 [2024-07-24 22:34:27.718543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-24 22:34:27.718570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-24 22:34:27.718673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-24 22:34:27.718699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-24 22:34:27.718795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-24 22:34:27.718822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.324 [2024-07-24 22:34:27.718929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.324 [2024-07-24 22:34:27.718958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.324 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.719187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.719855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.719882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.719986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.720505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.720941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.720968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.721127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.721276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.721412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.721567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.721698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.721827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.721955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.721981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.722642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.722897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.722924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.723292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.723849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.723875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 
00:25:02.325 [2024-07-24 22:34:27.723985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.325 [2024-07-24 22:34:27.724013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.325 qpair failed and we were unable to recover it. 00:25:02.325 [2024-07-24 22:34:27.724114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.724237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.724364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.724499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.724626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.724775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.724924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.724951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.725366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.725911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.725939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.726037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.726737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.726897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.726996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.727412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.727948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.727979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 
00:25:02.326 [2024-07-24 22:34:27.728087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.728113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.728231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.728257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.728357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.326 [2024-07-24 22:34:27.728383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.326 qpair failed and we were unable to recover it. 00:25:02.326 [2024-07-24 22:34:27.728505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.728534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.728638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.728664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-24 22:34:27.728772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.728799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.728906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.728934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-24 22:34:27.729437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 00:25:02.327 [2024-07-24 22:34:27.729971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.327 [2024-07-24 22:34:27.729997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.327 qpair failed and we were unable to recover it. 
00:25:02.327 [2024-07-24 22:34:27.730104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.327 [2024-07-24 22:34:27.730129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.327 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / ECONNREFUSED; sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 22:34:27.730 through 22:34:27.749, cycling across tqpairs 0x7f02c0000b90, 0x7f02b8000b90, and 0x7f02b0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:02.330 [2024-07-24 22:34:27.749459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.749494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.749639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.749687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.749843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.749901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.750063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.750213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 
00:25:02.330 [2024-07-24 22:34:27.750390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.750515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.750652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.750824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.750879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.751050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.751103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 
00:25:02.330 [2024-07-24 22:34:27.751270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.751320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.751438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.751467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.751658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.751710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.751855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.751913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.752044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.752099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 
00:25:02.330 [2024-07-24 22:34:27.752274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.752327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.752421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.752447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.752616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.752670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.752777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.752804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.752964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.753022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 
00:25:02.330 [2024-07-24 22:34:27.753181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.753238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.753351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.330 [2024-07-24 22:34:27.753377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.330 qpair failed and we were unable to recover it. 00:25:02.330 [2024-07-24 22:34:27.753535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.753563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.753707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.753762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.753967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.754112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.754240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.754421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.754619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.754769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.754934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.754986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.755144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.755172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.755375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.755428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.755546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.755573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.755726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.755778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.755885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.755914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.756066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.756120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.756325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.756376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.756531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.756559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.756660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.756686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.756900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.756949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.757103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.757129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.757305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.757353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.757509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.757554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.757654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.757680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.757777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.757803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.757960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.758201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.758349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.758582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 
00:25:02.331 [2024-07-24 22:34:27.758738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.758863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.758888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.759042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.331 [2024-07-24 22:34:27.759093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.331 qpair failed and we were unable to recover it. 00:25:02.331 [2024-07-24 22:34:27.759197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.759224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.759358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.759384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.759492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.759519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.759676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.759704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.759898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.759924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.760078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.760268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.760428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.760568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.760699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.760911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.760960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.761148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.761202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.761401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.761456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.761677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.761714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.761872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.761927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.762083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.762212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.762342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.762553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.762687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.762928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.762982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.763121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.763176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.763373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.763426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.763627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.763676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.763809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.763865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.764073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.764125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.764328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.764378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.764532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.764560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.764724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.764777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.764975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.765125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.765284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.765551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.765783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.765935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.765961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.766121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.766173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 00:25:02.332 [2024-07-24 22:34:27.766312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.332 [2024-07-24 22:34:27.766367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.332 qpair failed and we were unable to recover it. 
00:25:02.332 [2024-07-24 22:34:27.766500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.766553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.766758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.766807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.766939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.766991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.767094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.767120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.767246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.767301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.767543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.767570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.767757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.767807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.767908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.767934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.768092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.768120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.768310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.768364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.768533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.768559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.768744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.768795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.768981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.769010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.769205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.769257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.769411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.769467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.769643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.769697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.769805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.769832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.769976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.770208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.770404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.770566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.770698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.770888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.770941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.771048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.771203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.771354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.771488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.771702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.771906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.771933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.772129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.772180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.772336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.772362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.772536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.772562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.772750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.772807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.773007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.773055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.773192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.773247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 
00:25:02.333 [2024-07-24 22:34:27.773437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.773494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.333 [2024-07-24 22:34:27.773640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.333 [2024-07-24 22:34:27.773697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.333 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.773882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.773935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.774036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.774063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.774213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.774267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.774457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.774516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.774615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.774641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.774843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.774896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.775037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.775088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.775228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.775284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.775384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.775410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.775606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.775658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.775847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.775896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.776090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.776116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.776241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.776299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.776400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.776426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.776620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.776668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.776770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.776796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.776970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.777167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.777319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.777493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.777683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.777930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.777980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.778080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.778107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.778287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.778334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.778497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.778545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.778744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.778795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.778996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.779124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.779251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.779478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.779747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.779922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.779975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.780129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.780155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 
00:25:02.334 [2024-07-24 22:34:27.780353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.780379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.780573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.780623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.780722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.780749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.781005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.781055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.334 qpair failed and we were unable to recover it. 00:25:02.334 [2024-07-24 22:34:27.781242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.334 [2024-07-24 22:34:27.781291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 
00:25:02.335 [2024-07-24 22:34:27.781386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.781413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.781620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.781675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.781841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.781895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.782040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.782093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.782249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.782304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 
00:25:02.335 [2024-07-24 22:34:27.782425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.782490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.782671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.782697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.782892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.782939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.783094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.783217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 
00:25:02.335 [2024-07-24 22:34:27.783394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.783576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.783728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.783890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.783942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 00:25:02.335 [2024-07-24 22:34:27.784103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.335 [2024-07-24 22:34:27.784135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.335 qpair failed and we were unable to recover it. 
00:25:02.335 [2024-07-24 22:34:27.784327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.784378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.784524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.784569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.784699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.784755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.784986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.785035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.785225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.785275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.785449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.785476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.785665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.785691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.785866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.785916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.786072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.786097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.786301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.786351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.786544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.786570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.786759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.786814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.786973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.787025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.787233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.787285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.787493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.787537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.787728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.787781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.787884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.787910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.788103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.335 [2024-07-24 22:34:27.788150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.335 qpair failed and we were unable to recover it.
00:25:02.335 [2024-07-24 22:34:27.788268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.788327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.788544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.788571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.788763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.788812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.788962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.789012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.789251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.789302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.789513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.789554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.789744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.789774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.789881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.789909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.790125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.790182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.790406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.790462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.790683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.790739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.790942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.790996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.791199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.791249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.791401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.791453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.791659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.791706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.791808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.791836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.791978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.792029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.792283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.792331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.792435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.792463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.792670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.792724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.792926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.792976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.793078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.793109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.793216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.793242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.793350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.793379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.793559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.793586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.793791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.793820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.794007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.794057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.794212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.794268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.794434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.794459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.794615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.794671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.794845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.794895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.795097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.795150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.795249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.795275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.795477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.795534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.795679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.336 [2024-07-24 22:34:27.795730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.336 qpair failed and we were unable to recover it.
00:25:02.336 [2024-07-24 22:34:27.795878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.795934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.796138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.796190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.796297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.796325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.796563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.796595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.796755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.796808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.797008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.797058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.797158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.797184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.797390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.797443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.797679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.797731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.797935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.797984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.798114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.798174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.798430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.798487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.798590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.798616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.798777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.798835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.798958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.798987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.799108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.799170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.799270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.799297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.799532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.799559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.799696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.799725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.799856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.799882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.800073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.800123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.800325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.800373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.800556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.800583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.800753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.800780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.800965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.800992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.801200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.801378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.801567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.801776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.801901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.801997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.802023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.802169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.802199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.802308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.802336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.802543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.802570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.802771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.802820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.803007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.803033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.803135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.803163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.337 qpair failed and we were unable to recover it.
00:25:02.337 [2024-07-24 22:34:27.803406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.337 [2024-07-24 22:34:27.803457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.338 qpair failed and we were unable to recover it.
00:25:02.338 [2024-07-24 22:34:27.803569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.338 [2024-07-24 22:34:27.803603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.338 qpair failed and we were unable to recover it.
00:25:02.338 [2024-07-24 22:34:27.803817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.338 [2024-07-24 22:34:27.803864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.338 qpair failed and we were unable to recover it.
00:25:02.338 [2024-07-24 22:34:27.804031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.338 [2024-07-24 22:34:27.804059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.338 qpair failed and we were unable to recover it.
00:25:02.338 [2024-07-24 22:34:27.804166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.338 [2024-07-24 22:34:27.804192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.338 qpair failed and we were unable to recover it.
00:25:02.338 [2024-07-24 22:34:27.804326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.804376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.804526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.804580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.804744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.804789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.804928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.804982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.805109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.805163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-24 22:34:27.805356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.805382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.805538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.805565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.805741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.805791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.805890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.805916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.806010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-24 22:34:27.806138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.806363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.806521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.806683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.806839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.806865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-24 22:34:27.807063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.807113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.807214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.807241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.807427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.807454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.807568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.807600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.807825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.807878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 
00:25:02.338 [2024-07-24 22:34:27.808022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.808074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.808242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.338 [2024-07-24 22:34:27.808292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.338 qpair failed and we were unable to recover it. 00:25:02.338 [2024-07-24 22:34:27.808474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.808538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.808724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.808751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.808941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.808991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-24 22:34:27.809203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.809256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.809357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.809383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.809534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.809593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.809701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.809728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.809926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.809975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-24 22:34:27.810111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.810138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.810268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.810294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.810460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.810522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.810761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.810809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.811011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.811061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-24 22:34:27.811193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.811249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.811438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.811466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.811663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.811713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.811902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.811929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.812069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.812122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-24 22:34:27.812311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.812361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.812534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.812562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.812785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.812813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.813014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.813065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.813171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.813198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 
00:25:02.339 [2024-07-24 22:34:27.813379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.813405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.813648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.813697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.813852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.339 [2024-07-24 22:34:27.813879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.339 qpair failed and we were unable to recover it. 00:25:02.339 [2024-07-24 22:34:27.814086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.814138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.814326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.814374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-24 22:34:27.814530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.814583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.814712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.814768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.814949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.814998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.815190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.815242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.815381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.815434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-24 22:34:27.815691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.815741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.815882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.815929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.816088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.816118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.816300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.816352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.816596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.816646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-24 22:34:27.816750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.816777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.816980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.817029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.817231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.817282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.817469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.817523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.817723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.817772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-24 22:34:27.817881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.817907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.818062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.818242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.818468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.818636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 
00:25:02.340 [2024-07-24 22:34:27.818764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.818936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.818985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.819186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.819213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.819428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.340 [2024-07-24 22:34:27.819490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.340 qpair failed and we were unable to recover it. 00:25:02.340 [2024-07-24 22:34:27.819689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.819742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 
00:25:02.341 [2024-07-24 22:34:27.819958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.820016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.820180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.820227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.820352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.820404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.820586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.820637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.820821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.820875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 
00:25:02.341 [2024-07-24 22:34:27.821028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.821078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.821315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.821363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.821569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.821619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.821753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.821804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.822009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.822061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 
00:25:02.341 [2024-07-24 22:34:27.822205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.822253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.822438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.822500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.822698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.822748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.822929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.822955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 00:25:02.341 [2024-07-24 22:34:27.823141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.341 [2024-07-24 22:34:27.823187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.341 qpair failed and we were unable to recover it. 
00:25:02.341 [... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats from 22:34:27.823304 through 22:34:27.846889 (console timestamps 00:25:02.341-00:25:02.345), cycling over tqpair addresses 0x7f02c0000b90, 0x7f02b0000b90, and 0x7f02b8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:02.345 [2024-07-24 22:34:27.847025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.847082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.847250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.847297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.847461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.847498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.847759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.847810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.847926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.847987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.848178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.848228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.848414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.848465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.848643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.848696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.848860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.848907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.849010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.849037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.849258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.849312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.849541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.849573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.849815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.849863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.850027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.850200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.850428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.850584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.850710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.850923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.850973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.851099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.851151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.851296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.851348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.851453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.851487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.851730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.851779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.851905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.851960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.852068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.852095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.852234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.852291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.852475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.852542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.852703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.852756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.852940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.852966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.853216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.853265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 
00:25:02.345 [2024-07-24 22:34:27.853499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.345 [2024-07-24 22:34:27.853554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.345 qpair failed and we were unable to recover it. 00:25:02.345 [2024-07-24 22:34:27.853757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.853812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.853998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.854053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.854196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.854249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.854429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.854455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.854578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.854604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.854781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.854833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.855011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.855233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.855489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.855646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.855771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.855970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.855997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.856176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.856229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.856412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.856464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.856626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.856679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.856824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.856878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.856976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.857002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.857194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.857242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.857435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.857498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.857651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.857703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.857892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.857947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.858079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.858131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.858284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.858338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.858436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.858463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.858632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.858683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.858874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.858924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.859125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.859173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.859337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.859391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.859575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.859602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 
00:25:02.346 [2024-07-24 22:34:27.859797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.859823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.859934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.859960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.860074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.346 [2024-07-24 22:34:27.860102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.346 qpair failed and we were unable to recover it. 00:25:02.346 [2024-07-24 22:34:27.860207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.860234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.860382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.860437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.860616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.860668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.860874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.860927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.861115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.861167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.861284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.861343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.861548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.861575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.861751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.861809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.861972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.862003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.862146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.862194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.862371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.862421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.862529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.862557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.862729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.862781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.862958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.863016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.863192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.863243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.863423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.863492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.863676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.863704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.863884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.863941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.864088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.864139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.864329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.864379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.864563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.864592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.864696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.864723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.864818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.864844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.865015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.865244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.865403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.865628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.865779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.865906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.865938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.866094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.866145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.866308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.866365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.866569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.866725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.866753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.866886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.866939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.867104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.867156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.867354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.867406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 00:25:02.347 [2024-07-24 22:34:27.867521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.867548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.347 qpair failed and we were unable to recover it. 
00:25:02.347 [2024-07-24 22:34:27.867737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.347 [2024-07-24 22:34:27.867787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.868013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.868065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.868266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.868318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.868530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.868556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.868655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.868681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.868860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.868886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.869025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.869078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.869233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.869259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.869415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.869473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.869679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.869733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.869829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.869855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.870010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.870057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.870192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.870246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.870442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.870515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.870648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.870699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.870865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.870917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.871081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.871138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.871342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.871392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.871590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.871623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.871800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.871829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.871987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.872041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.872142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.872173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.872341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.872395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.872576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.872620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.872775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.872830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.872951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.873194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.873393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.873578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.873702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.873829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.873910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.874065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.874115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.874271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.874328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.874453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.874485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.874664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.874716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 
00:25:02.348 [2024-07-24 22:34:27.874895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.874951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.875149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.875199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.348 [2024-07-24 22:34:27.875393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.348 [2024-07-24 22:34:27.875444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.348 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.875560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.875587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.875757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.875810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.875985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.876035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.876187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.876238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.876416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.876469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.876671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.876731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.876928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.876982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.877177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.877229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.877416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.877468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.877697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.877745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.877933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.877988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.878159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.878210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.878363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.878389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.878538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.878598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.878774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.878827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.878971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.879024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.879165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.879218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.879322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.879348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.879535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.879562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.879779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.879828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.879982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.880013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.880185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.880242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.880415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.880443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.880647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.880705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.880867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.880917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.881093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.881147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.881303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.881329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.881533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.881564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.881790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.881839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.881969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.882023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.882193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.882245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.882426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.882490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.349 [2024-07-24 22:34:27.882664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.882715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.882817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.882843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.883028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.883085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.883282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.883330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 00:25:02.349 [2024-07-24 22:34:27.883526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.349 [2024-07-24 22:34:27.883562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.349 qpair failed and we were unable to recover it. 
00:25:02.350 [... the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair repeats continuously from 22:34:27.883729 through 22:34:27.906115, cycling over tqpair=0x7f02b0000b90, 0x7f02b8000b90, and 0x7f02c0000b90 (plus a single occurrence of tqpair=0x168b120 at 22:34:27.895612), all with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:02.353 [2024-07-24 22:34:27.906243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.906297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.906473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.906507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.906680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.906729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.906895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.906952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.907144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.907195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 
00:25:02.353 [2024-07-24 22:34:27.907433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.907488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.907685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.907734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.907846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.907874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.908051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.908107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.908265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.908322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 
00:25:02.353 [2024-07-24 22:34:27.908515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.908560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.908689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.908745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.908913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.908967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.909138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.909191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.909325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.909379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 
00:25:02.353 [2024-07-24 22:34:27.909535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.909564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.909750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.909802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.909964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.910021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.910255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.910307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.910463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.910527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 
00:25:02.353 [2024-07-24 22:34:27.910708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.910734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.910882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.910935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.911108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.911160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.911329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.911379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.911555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.911609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 
00:25:02.353 [2024-07-24 22:34:27.911771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.911823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.911937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.911964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.912116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.912169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.353 qpair failed and we were unable to recover it. 00:25:02.353 [2024-07-24 22:34:27.912266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.353 [2024-07-24 22:34:27.912292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.912448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.912475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.912635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.912686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.912841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.912866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.913017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.913070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.913256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.913311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.913470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.913535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.913684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.913739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.913945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.913997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.914149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.914175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.914329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.914380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.914492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.914519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.914664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.914720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.914888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.914914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.915064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.915122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.915245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.915270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.915470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.915530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.915669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.915723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.915851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.915902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.916075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.916128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.916228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.916254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.916408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.916434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.916568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.916595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.916818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.916867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.916962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.917040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.917194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.917241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.917442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.917504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.917700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.917760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.917904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.917955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.918102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.918159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.918309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.918350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.918543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.918571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.918748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.918773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.918930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.918979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.919166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.919214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.919331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.919357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.919577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.919630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.919732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.919757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.919950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.919999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.920127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.920181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.920279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.920305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.920430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.920455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 
00:25:02.354 [2024-07-24 22:34:27.920629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.920657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.920775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.354 [2024-07-24 22:34:27.920801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.354 qpair failed and we were unable to recover it. 00:25:02.354 [2024-07-24 22:34:27.920986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.355 [2024-07-24 22:34:27.921037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.355 qpair failed and we were unable to recover it. 00:25:02.355 [2024-07-24 22:34:27.921149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.355 [2024-07-24 22:34:27.921181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.355 qpair failed and we were unable to recover it. 00:25:02.355 [2024-07-24 22:34:27.921348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.355 [2024-07-24 22:34:27.921398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.355 qpair failed and we were unable to recover it. 
00:25:02.355 [2024-07-24 22:34:27.921508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.355 [2024-07-24 22:34:27.921537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.355 qpair failed and we were unable to recover it.
00:25:02.358 [... the same connect() failure (errno = 111) and unrecoverable-qpair sequence repeated through 22:34:27.945350 for tqpair addresses 0x7f02c0000b90, 0x7f02b8000b90, 0x7f02b0000b90, and 0x168b120, all targeting addr=10.0.0.2, port=4420 ...]
00:25:02.358 [2024-07-24 22:34:27.945529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.945566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.945710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.945767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.945987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.946138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.946347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 
00:25:02.358 [2024-07-24 22:34:27.946510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.946639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.946868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.946919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.947065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.947091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.947302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.947353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 
00:25:02.358 [2024-07-24 22:34:27.947534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.947561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.947719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.947780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.947948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.948142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.948364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 
00:25:02.358 [2024-07-24 22:34:27.948576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.948790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.948921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.948947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.949114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.949264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 
00:25:02.358 [2024-07-24 22:34:27.949391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.949524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.949652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.949816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.949868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 00:25:02.358 [2024-07-24 22:34:27.950056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.358 [2024-07-24 22:34:27.950105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.358 qpair failed and we were unable to recover it. 
00:25:02.358 [2024-07-24 22:34:27.950266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.950318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.950421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.950450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.950648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.950699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.950866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.950914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.951100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.951148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.951343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.951376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.951526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.951555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.951752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.951803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.951906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.951933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.952114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.952140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.952302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.952353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.952523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.952552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.952719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.952773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.952942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.952998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.953203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.953251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.953464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.953524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.953665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.953722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.953866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.953920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.954119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.954171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.954327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.954376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.954540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.954568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.954721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.954773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.954970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.955023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.955212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.955264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.955442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.955501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.955681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.955707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.955857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.955911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.956100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.956126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.956297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.956350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.956518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.956562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.956729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.956785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.956896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.956923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.957032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.957059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.957233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.957260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.957371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.957400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 
00:25:02.359 [2024-07-24 22:34:27.957535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.957591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.957800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.957847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.359 qpair failed and we were unable to recover it. 00:25:02.359 [2024-07-24 22:34:27.958046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.359 [2024-07-24 22:34:27.958096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.958266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.958326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.958533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.958561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 
00:25:02.360 [2024-07-24 22:34:27.958784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.958831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.958966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.959028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.959231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.959284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.959385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.959411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.959631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.959682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 
00:25:02.360 [2024-07-24 22:34:27.959787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.959819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.960020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.960046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.960288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.960336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.960544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.960570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.960741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.960793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 
00:25:02.360 [2024-07-24 22:34:27.961000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.961048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.961250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.961303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.961403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.961428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.961624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.961677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 00:25:02.360 [2024-07-24 22:34:27.961784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.360 [2024-07-24 22:34:27.961812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.360 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-24 22:34:27.985294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.985362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.985461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.985493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.985598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.985626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.985836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.985887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.986003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.986029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-24 22:34:27.986187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.986242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.986417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.986467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.986602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.986667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.986857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.986904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.987058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.987112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-24 22:34:27.987215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.987242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.987433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.987490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.987676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.987702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.987802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.987828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.988029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.988078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-24 22:34:27.988213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.988261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.988455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.988489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.988690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.988740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.988885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.988939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.989040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.989066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.649 [2024-07-24 22:34:27.989261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.989315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.989418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.989445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.989602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.989653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.989791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.989844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 00:25:02.649 [2024-07-24 22:34:27.990037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.649 [2024-07-24 22:34:27.990063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.649 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.990256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.990308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.990446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.990505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.990644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.990697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.990798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.990823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.990996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.991022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.991219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.991269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.991455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.991514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.991666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.991727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.991936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.991988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.992087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.992112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.992244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.992295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.992466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.992503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.992733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.992790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.992947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.992994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.993131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.993186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.993373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.993427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.993619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.993672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.993775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.993801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.993950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.994163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.994369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.994519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.994706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.994906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.994949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.995087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.995140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.995242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.995268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.995502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.995548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.995687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.995740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.995968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.996024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.996194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.996247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.996450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.996520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.996672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.996727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.996913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.996961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.997070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.997098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.997297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.997348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 
00:25:02.650 [2024-07-24 22:34:27.997475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.650 [2024-07-24 22:34:27.997538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.650 qpair failed and we were unable to recover it. 00:25:02.650 [2024-07-24 22:34:27.997679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.997730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.997876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.997933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.998058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.998110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.998244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.998296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:27.998532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.998559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.998736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.998790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.998968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.999028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.999238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.999289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.999438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.999496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:27.999664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.999690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:27.999844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:27.999898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.000005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.000032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.000176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.000231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.000345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.000372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:28.000575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.000625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.000767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.000816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.000977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.001029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.001235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.001289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.001384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.001410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:28.001558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.001609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.001789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.001842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.002001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.002164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.002331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:28.002488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.002723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.002928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.002979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.003147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.003173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.003346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.003399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:28.003516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.003544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.003658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.003685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.003837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.003863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.003974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.004039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.004140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.004166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 
00:25:02.651 [2024-07-24 22:34:28.004354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.004402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.004507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.004535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.004656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.651 [2024-07-24 22:34:28.004682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.651 qpair failed and we were unable to recover it. 00:25:02.651 [2024-07-24 22:34:28.004818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.004872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.004974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-24 22:34:28.005158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.005320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.005496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.005704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.005826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.005853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-24 22:34:28.006055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.006113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.006289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.006338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3928731 Killed "${NVMF_APP[@]}" "$@" 00:25:02.652 [2024-07-24 22:34:28.006471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.006532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.006655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.006709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.006891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.006945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:02.652 [2024-07-24 22:34:28.007113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.007157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:02.652 [2024-07-24 22:34:28.007333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.007385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:02.652 [2024-07-24 22:34:28.007572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.007619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:02.652 [2024-07-24 22:34:28.007769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.007825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.652 [2024-07-24 22:34:28.007946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.008157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.008319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.008469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.008642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-24 22:34:28.008857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.008883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.009045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.009095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.009263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.009318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.009419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.009445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.009615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.009669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-24 22:34:28.009858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.009884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.010096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.010145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.010274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.010327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.010507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.010566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.010692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.010746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 
00:25:02.652 [2024-07-24 22:34:28.010897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.010949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.011082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.011136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.011247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.011277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.011444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.011507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.652 qpair failed and we were unable to recover it. 00:25:02.652 [2024-07-24 22:34:28.011708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.652 [2024-07-24 22:34:28.011760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-24 22:34:28.011890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.011944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.012042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.012069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.012169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.012195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.012365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.012411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.012546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.012605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-24 22:34:28.012788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.012839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.012986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.013039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.013142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.013170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.013299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.013357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.013455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.013485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-24 22:34:28.013613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3929157 00:25:02.653 [2024-07-24 22:34:28.013674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3929157 00:25:02.653 [2024-07-24 22:34:28.013840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.013893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3929157 ']' 00:25:02.653 [2024-07-24 22:34:28.014068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.014097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:02.653 [2024-07-24 22:34:28.014307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.014359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.653 [2024-07-24 22:34:28.014543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:02.653 [2024-07-24 22:34:28.014570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.653 [2024-07-24 22:34:28.014767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.014824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-24 22:34:28.014970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.015151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.015283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.015448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.015644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 
00:25:02.653 [2024-07-24 22:34:28.015899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.015925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.016096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.016148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.016344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.016371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.016552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.653 [2024-07-24 22:34:28.016580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.653 qpair failed and we were unable to recover it. 00:25:02.653 [2024-07-24 22:34:28.016785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.016840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-24 22:34:28.016998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.017024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-24 22:34:28.017208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.017258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-24 22:34:28.017397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.017456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-24 22:34:28.017699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.017754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 00:25:02.654 [2024-07-24 22:34:28.017866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.654 [2024-07-24 22:34:28.017931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.654 qpair failed and we were unable to recover it. 
00:25:02.654 [2024-07-24 22:34:28.018101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.018159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.018310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.018368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.018533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.018560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.018720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.018771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.018933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.018988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.019205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.019257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.019408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.019452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.019651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.019704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.019909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.019962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.020137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.020182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.020333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.020386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.020568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.020595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.020729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.020781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.020960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.021009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.021144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.021197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.021375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.021403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.021602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.021653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.021847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.021897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.022097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.022148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.022314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.022371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.022475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.022515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.022673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.022726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.022929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.022982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.023144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.023200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.023356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.023409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.023558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.023615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.023753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.023778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.023938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.023964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.024189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.024237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.024402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.024451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.024644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.024671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.654 [2024-07-24 22:34:28.024919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.654 [2024-07-24 22:34:28.024969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.654 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.025141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.025192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.025352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.025406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.025552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.025604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.025749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.025802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.025946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.025997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.026158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.026184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.026337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.026365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.026513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.026560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.026686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.026740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.026844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.026875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.027022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.027073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.027231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.027256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.027406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.027463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.027623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.027681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.027782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.027808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.028054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.028239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.028390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.028635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.028837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.028964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.029122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.029293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.029543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.029709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.029910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.029963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.030091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.030292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.030420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.030551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.030777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.030958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.031003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.031152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.031205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.031988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.032018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.032215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.032264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.032471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.032530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.032770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.655 [2024-07-24 22:34:28.032849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.655 qpair failed and we were unable to recover it.
00:25:02.655 [2024-07-24 22:34:28.033063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.033125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.033320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.033379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.033605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.033667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.033869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.033928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.034154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.034211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.034379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.034432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.034599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.034643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.034763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.034807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.034928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.034974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.035149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.035311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.035491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.035648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.035815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.035969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.036144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.036302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.036429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.036625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.036878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.036940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.038297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.038337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.038505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.038562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.038722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.038777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.038966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.039154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.039348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.039530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.039708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.039872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.039897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.040105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.040303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.040530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.040675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.040842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.040974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.041015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.041147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.041178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.041319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.041362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.041493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.656 [2024-07-24 22:34:28.041538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.656 qpair failed and we were unable to recover it.
00:25:02.656 [2024-07-24 22:34:28.041674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.041716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.041852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.041882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.042920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.657 [2024-07-24 22:34:28.042946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.657 qpair failed and we were unable to recover it.
00:25:02.657 [2024-07-24 22:34:28.043125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.043167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.043325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.043378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.043505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.043546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.043664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.043729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.043902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.043955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-24 22:34:28.044073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.044134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.044291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.044318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.044475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.044547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.044648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.044674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.044847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.044907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-24 22:34:28.045075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.045132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.045266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.045347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.045453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.045490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.045653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.045719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.045902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.045944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-24 22:34:28.046097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.046153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.046323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.046350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.046533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.046560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.046712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.046766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.046906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.046954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 
00:25:02.657 [2024-07-24 22:34:28.047120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.047181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.047346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.047403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.047558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.047605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.047779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.047809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.657 qpair failed and we were unable to recover it. 00:25:02.657 [2024-07-24 22:34:28.048005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.657 [2024-07-24 22:34:28.048048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-24 22:34:28.048168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.048212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.048358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.048413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.048533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.048560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.048727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.048773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.048928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.048981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-24 22:34:28.049112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.049165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.049318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.049399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.049508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.049547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.049684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.049735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.049897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.049924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-24 22:34:28.050061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.050146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.050313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.050362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.050524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.050575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.050722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.050781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.050976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.051019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-24 22:34:28.051162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.051219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.051395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.051454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.051646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.051677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.051837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.051892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.052031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.052072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 
00:25:02.658 [2024-07-24 22:34:28.052245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.658 [2024-07-24 22:34:28.052272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.658 qpair failed and we were unable to recover it. 00:25:02.658 [2024-07-24 22:34:28.052420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.052509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.052620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.052645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.052756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.052838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.052992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.053047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-24 22:34:28.053228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.053290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.053486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.053531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.053677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.053730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.053843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.053872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.053988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.054014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-24 22:34:28.054159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.054216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.054382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.054449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.054635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.054692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.054861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.054914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.055068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.055119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-24 22:34:28.055259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.055318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.055507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.055556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.055701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.055746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.055865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.055893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.056007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.056047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-24 22:34:28.056212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.056272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.056452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.056513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.056703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.056758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.056911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.056964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 00:25:02.659 [2024-07-24 22:34:28.057141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.057166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.659 qpair failed and we were unable to recover it. 
00:25:02.659 [2024-07-24 22:34:28.057338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.659 [2024-07-24 22:34:28.057380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.057566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.057610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.057775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.057801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.057975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.058002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.058226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.058269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-24 22:34:28.058450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.058486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.058665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.058712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.058901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.058930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.059081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.059149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.059331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.059385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-24 22:34:28.059492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.059519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.059647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.059698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.059803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.059831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.059981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.060192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-24 22:34:28.060373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.060562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.060770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.060940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.060967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.061131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.061159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-24 22:34:28.061270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.061296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.061437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.061476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.061667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.061718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.061869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.061923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.062105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.062159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 
00:25:02.660 [2024-07-24 22:34:28.062306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.062357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.660 qpair failed and we were unable to recover it. 00:25:02.660 [2024-07-24 22:34:28.062471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.660 [2024-07-24 22:34:28.062528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.062727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.062769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.062919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.062961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.063100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.063153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-24 22:34:28.063315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.063366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.063522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.063580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.063711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.063763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.063922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.063975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.064138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.064188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-24 22:34:28.064341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.064396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.064498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.064525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.064677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.064729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.064887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.064916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.065077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.065131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-24 22:34:28.065232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.065258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.065384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.065442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.065602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.065652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.065794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.065858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.066029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.066084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-24 22:34:28.066202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.066240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.066393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.066447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.066573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.066628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.066792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.066842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.067000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.067027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 
00:25:02.661 [2024-07-24 22:34:28.067158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.067187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.067362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.067432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.661 [2024-07-24 22:34:28.067596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.661 [2024-07-24 22:34:28.067650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.661 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.067754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.067780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.067914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.067968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.068113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.068166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.068325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.068382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.068530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.068586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.068727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.068787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.068971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.069025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.069181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.069206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.069376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.069372] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:25:02.662 [2024-07-24 22:34:28.069437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.069487] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.662 [2024-07-24 22:34:28.069598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.069654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.069835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.069875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.070034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.070083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.070215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.070264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.070377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.070405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.070564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.070605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.070765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.070815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.070950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.071018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.071188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.071244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.071409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.071438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.071563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.071624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.071819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.071876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.071994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.072025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.072182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.072235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.072382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.072439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.072654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.072708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.072839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.072869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 
00:25:02.662 [2024-07-24 22:34:28.073031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.073084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.073241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.073296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.073443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.073503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.073651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.073691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.662 qpair failed and we were unable to recover it. 00:25:02.662 [2024-07-24 22:34:28.073880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.662 [2024-07-24 22:34:28.073927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-24 22:34:28.074070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.074121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.074277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.074331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.074533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.074560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.074713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.074770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.074916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.074969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-24 22:34:28.075130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.075157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.075363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.075406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.075540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.075583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.075699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.075739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.075889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.075940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-24 22:34:28.076092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.076148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.076309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.076364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.076545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.076575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.076760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.076811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 00:25:02.663 [2024-07-24 22:34:28.076960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.663 [2024-07-24 22:34:28.077015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.663 qpair failed and we were unable to recover it. 
00:25:02.663 [2024-07-24 22:34:28.077198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.077241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.077345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.077372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.077550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.077615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.077746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.077799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.077938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.077993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.078113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.078174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.078328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.078361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.078561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.078615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.078766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.078818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.078952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.079005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.079120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.079152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.079311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.079374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.079545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.079591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.079772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.079815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.079965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.080016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.080178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.080244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.080410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.080459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.080623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.080676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.080848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.080900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.081017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.081047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.081226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.081290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.663 [2024-07-24 22:34:28.081425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.663 [2024-07-24 22:34:28.081454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.663 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.081618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.081647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.081799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.081839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.082006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.082038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.082211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.082263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.082369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.082398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.082733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.082762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.082932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.082960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.083135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.083191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.083342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.083394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.083505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.083533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.083707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.083761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.083925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.083980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.084150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.084178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.084340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.084398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.084584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.084638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.084784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.084826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.084935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.084962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.085104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.085156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.085325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.085388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.085537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.085597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.085716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.085756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.085890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.085945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.086136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.086177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.086325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.086368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.086539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.086567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.086744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.086803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.086987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.087048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.087177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.087205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.087357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.087415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.087549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.087619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.087818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.087870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.664 [2024-07-24 22:34:28.088041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.664 [2024-07-24 22:34:28.088093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.664 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.088215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.088245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.088427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.088471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.088585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.088613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.088777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.088828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.088925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.088951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.089109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.089158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.089260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.089287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.089459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.089518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.089658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.089713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.089891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.089944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.090122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.090163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.090276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.090306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.090415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.090442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.090616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.090672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.090820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.090900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.091047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.091089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.091254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.091307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.091440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.091502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.091680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.091736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.091841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.091871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.092024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.092077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.092253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.092316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.092448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.092508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.092654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.092712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.092931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.092976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.093134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.093187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.093372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.093400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.093562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.093605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.093750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.093801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.093931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.093985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.094187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.094253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.665 qpair failed and we were unable to recover it.
00:25:02.665 [2024-07-24 22:34:28.094425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.665 [2024-07-24 22:34:28.094492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.094690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.094733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.094853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.094892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.095040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.095093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.095289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.095345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.095508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.095558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.095700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.095754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.095915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.095970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.096132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.096162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.096323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.096367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.096560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.096605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.096735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.096789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.096891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.096917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.097110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.097164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.097307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.097354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.097512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.097561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.097723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.097750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.097929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.097985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.098138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.098193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.098309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.098389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.098575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.098616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.098781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.098837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.098988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.099030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.099178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.099232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.099379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.099429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.099582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.099643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.099816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.099842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.100014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.100066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.100250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.100310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.100502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.666 [2024-07-24 22:34:28.100532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.666 qpair failed and we were unable to recover it.
00:25:02.666 [2024-07-24 22:34:28.100637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-24 22:34:28.100663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-24 22:34:28.100826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-24 22:34:28.100880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.666 [2024-07-24 22:34:28.101034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.666 [2024-07-24 22:34:28.101115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.666 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.101271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.101306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.101474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.101534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.101718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.101767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.101920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.101951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.102153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.102194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.102342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.102394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.102561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.102620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.102787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.102842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.103001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.103195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.103392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.103543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.103737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.103890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.103917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.104428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.104837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.104975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.105031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.105179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.105232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.105414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.105442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.105578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.105604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.105749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.105802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.105980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.106169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.106375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.106615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.106798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.106958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.106985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.107101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.107284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.107442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.107595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.107774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 00:25:02.667 [2024-07-24 22:34:28.107896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.107922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.667 qpair failed and we were unable to recover it. 
00:25:02.667 [2024-07-24 22:34:28.108084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.667 [2024-07-24 22:34:28.108144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.108263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.108293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.108443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.108502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.108625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.108652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.668 [2024-07-24 22:34:28.108821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.108868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.109004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.109060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.109196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.109250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.109385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.109440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.109616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.109645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.109812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.109866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.110032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.110085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.110288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.110339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.110541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.110570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.110675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.110702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.110815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.110843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.111009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.111162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.111345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.111509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.111648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.111849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.111902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.112683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.112944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.112971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.113366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.113949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.113975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 
00:25:02.668 [2024-07-24 22:34:28.114083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.114110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.114242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.114271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.114388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.668 [2024-07-24 22:34:28.114415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.668 qpair failed and we were unable to recover it. 00:25:02.668 [2024-07-24 22:34:28.114519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.114547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.114649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.114675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.114795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.114821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.114930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.114956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.115498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.115912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.115937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.116038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.116193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.116341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.116475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.116654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.116800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.116932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.116958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.117646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.117914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.117941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.118366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 00:25:02.669 [2024-07-24 22:34:28.118911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.669 [2024-07-24 22:34:28.118937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.669 qpair failed and we were unable to recover it. 
00:25:02.669 [2024-07-24 22:34:28.119039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.119178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.119311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.119449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.119598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.119731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.119858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.119885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.120430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.120855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.120885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.121011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.121165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.121310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.121438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.121576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.121708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.121839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.121866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.122575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.122875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.122994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.123146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.123279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.123444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.123587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.123713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.123839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 
00:25:02.670 [2024-07-24 22:34:28.123962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.123988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.124121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.124147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.670 [2024-07-24 22:34:28.124264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.670 [2024-07-24 22:34:28.124289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.670 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.124403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.124429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.124539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.124566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.124683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.124710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.124835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.124861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.124962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.124987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.125416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.125867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.125998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.126137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.126296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.126443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.126588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.126734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.126898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.126924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.127584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.127848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.127874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.128270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.128893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.128922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.671 [2024-07-24 22:34:28.129025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.129051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.129159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.129185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.129293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.129320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.129422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.129448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 00:25:02.671 [2024-07-24 22:34:28.129556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.671 [2024-07-24 22:34:28.129582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.671 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.129719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.129745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.129847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.129873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.129986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.130372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.130920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.130946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.131075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.131204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.131356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.131484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.131617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.131763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.131903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.131929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.132441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.132957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.132983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.133091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.133214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.133351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.133495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.133634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.133771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.133904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.133930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.134034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.134170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.134319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 
00:25:02.672 [2024-07-24 22:34:28.134463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.134607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.672 qpair failed and we were unable to recover it. 00:25:02.672 [2024-07-24 22:34:28.134739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.672 [2024-07-24 22:34:28.134768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.134865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.134891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.134991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.135130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.135265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.135393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.135518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.135648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.135787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.135911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.135941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.136452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.136879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.136986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.137110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.137235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.137376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.137501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.137628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.137761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.137885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.137913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.138411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.138954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.138984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.673 [2024-07-24 22:34:28.139087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.139113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.139218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.139244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.139347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.139373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.139476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.139509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 00:25:02.673 [2024-07-24 22:34:28.139618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.673 [2024-07-24 22:34:28.139647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.673 qpair failed and we were unable to recover it. 
00:25:02.674 [2024-07-24 22:34:28.139746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.139772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.139888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.139913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 
00:25:02.674 [2024-07-24 22:34:28.140405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.140913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.140939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 
00:25:02.674 [2024-07-24 22:34:28.141043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.141073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.141193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.141220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.141343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.141371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.141476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.141509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 00:25:02.674 [2024-07-24 22:34:28.141615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.674 [2024-07-24 22:34:28.141642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.674 qpair failed and we were unable to recover it. 
00:25:02.674 [2024-07-24 22:34:28.141745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.141771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.141898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.141923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.142891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.142918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.143831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:02.674 [2024-07-24 22:34:28.143877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.143902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.144012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.674 [2024-07-24 22:34:28.144039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.674 qpair failed and we were unable to recover it.
00:25:02.674 [2024-07-24 22:34:28.144147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.144876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.144984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.145945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.145973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.146892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.146921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.147853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.147878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.148870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.148898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.149004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.149033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.149166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.149195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.675 [2024-07-24 22:34:28.149302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.675 [2024-07-24 22:34:28.149329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.675 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.149432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.149462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.149571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.149598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.149710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.149737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.149843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.149871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.150959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.150986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.151958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.151984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.152907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.152935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.153917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.153945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.154049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.154074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.154173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.154199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.154296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.154323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.154429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.154455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420
00:25:02.676 qpair failed and we were unable to recover it.
00:25:02.676 [2024-07-24 22:34:28.154571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.676 [2024-07-24 22:34:28.154599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.677 qpair failed and we were unable to recover it.
00:25:02.677 [2024-07-24 22:34:28.154706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.677 [2024-07-24 22:34:28.154733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.677 qpair failed and we were unable to recover it.
00:25:02.677 [2024-07-24 22:34:28.154848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.677 [2024-07-24 22:34:28.154876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.677 qpair failed and we were unable to recover it.
00:25:02.677 [2024-07-24 22:34:28.154985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.155682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.155962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.155989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.156361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.156893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.156918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.157018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.157148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.157285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.157444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.157584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.157722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.157860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.157887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.158382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.158905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.158931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-24 22:34:28.159034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.159061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.159166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.159192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.159291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.159318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.159420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.159446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 00:25:02.677 [2024-07-24 22:34:28.159571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.677 [2024-07-24 22:34:28.159599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.159704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.159736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.159871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.159897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.160438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.160872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.160974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.161128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.161253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.161383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.161516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.161663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.161810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.161941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.161968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.162503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.162915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.162940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.163202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 
00:25:02.678 [2024-07-24 22:34:28.163862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.163889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.163994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.164021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.164151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.678 [2024-07-24 22:34:28.164179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.678 qpair failed and we were unable to recover it. 00:25:02.678 [2024-07-24 22:34:28.164288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.164317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.164423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.164452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 
00:25:02.679 [2024-07-24 22:34:28.164568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.164594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.164699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.164727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.164861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.164887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.164996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.165131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 
00:25:02.679 [2024-07-24 22:34:28.165296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.165432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.165563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.165697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.165827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.165854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 
00:25:02.679 [2024-07-24 22:34:28.165990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.166016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.166118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.166144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.166251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.166280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.166376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.166402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 00:25:02.679 [2024-07-24 22:34:28.166504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.679 [2024-07-24 22:34:28.166531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.679 qpair failed and we were unable to recover it. 
00:25:02.679 [2024-07-24 22:34:28.166662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.679 [2024-07-24 22:34:28.166688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.679 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats continuously from 22:34:28.166789 through 22:34:28.182429, cycling over tqpair=0x7f02c0000b90, 0x7f02b8000b90, 0x7f02b0000b90, and 0x168b120, all against addr=10.0.0.2, port=4420 ...]
00:25:02.682 [2024-07-24 22:34:28.182532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.182559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.182692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.182718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.182819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.182845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.182974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.182999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.183099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 
00:25:02.682 [2024-07-24 22:34:28.183223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.183385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.183518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.183653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.183797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 
00:25:02.682 [2024-07-24 22:34:28.183932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.183960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.184069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.184200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.184361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.184495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 
00:25:02.682 [2024-07-24 22:34:28.184626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.184780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.682 [2024-07-24 22:34:28.184806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.682 qpair failed and we were unable to recover it. 00:25:02.682 [2024-07-24 22:34:28.185833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.185866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.185977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.186108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.186244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.186372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.186513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.186657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.186816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.186943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.186969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.187103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.187261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.187396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.187545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.187685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.187849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.187875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.188394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.188928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.188957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.189094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.189224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.189352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.189504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.189674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 
00:25:02.683 [2024-07-24 22:34:28.189801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.189930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.189955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.190056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.190082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.190191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.683 [2024-07-24 22:34:28.190217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.683 qpair failed and we were unable to recover it. 00:25:02.683 [2024-07-24 22:34:28.190322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.190354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.190455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.190488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.190627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.190653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.190762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.190792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.190929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.190958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.191062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.191089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.191217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.191244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.192589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.192889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.192916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.193292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.193861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.193887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.193987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.194013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.194113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.194139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.194241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.194267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.194370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.194399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 00:25:02.684 [2024-07-24 22:34:28.194503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.684 [2024-07-24 22:34:28.194533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.684 qpair failed and we were unable to recover it. 
00:25:02.684 [2024-07-24 22:34:28.195302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.684 [2024-07-24 22:34:28.195333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.684 qpair failed and we were unable to recover it.
[... the connect()/qpair error pair above repeats continuously from 22:34:28.195302 through 22:34:28.213296 (log timestamps 00:25:02.684-00:25:02.687), cycling over tqpairs 0x7f02c0000b90, 0x168b120, 0x7f02b0000b90, and 0x7f02b8000b90; every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:25:02.687 [2024-07-24 22:34:28.213433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.213472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.213614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.213651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.213778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.213809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.213949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.213976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.214109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 
00:25:02.687 [2024-07-24 22:34:28.214243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.214419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.214571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.214709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 00:25:02.687 [2024-07-24 22:34:28.214870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.687 [2024-07-24 22:34:28.214897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.687 qpair failed and we were unable to recover it. 
00:25:02.687 [2024-07-24 22:34:28.215032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.215058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.215194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.215221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.215359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.215388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.216382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.216414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.216531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.216560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 
00:25:02.688 [2024-07-24 22:34:28.216695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.216722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.216859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.216885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 
00:25:02.688 [2024-07-24 22:34:28.217476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.217903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.217929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.218063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 
00:25:02.688 [2024-07-24 22:34:28.218197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.218359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.218487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.218615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.218743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 
00:25:02.688 [2024-07-24 22:34:28.218907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.218934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.219036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.219064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.219203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.688 [2024-07-24 22:34:28.219228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.688 qpair failed and we were unable to recover it. 00:25:02.688 [2024-07-24 22:34:28.219334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.219367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.219509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.219537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.219672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.219698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.219807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.219834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.219960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.219986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.220749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.220781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.220893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.220922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.221061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.221186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.221309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.221437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.221589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.221742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.221886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.221922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.222539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.222882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.222999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.223130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.223263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.223429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.223573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.223712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.223841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.223969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.223994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 
00:25:02.689 [2024-07-24 22:34:28.224677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.224935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.224961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.225075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.689 [2024-07-24 22:34:28.225101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.689 qpair failed and we were unable to recover it. 00:25:02.689 [2024-07-24 22:34:28.225212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.225237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 
00:25:02.690 [2024-07-24 22:34:28.225353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.225380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 00:25:02.690 [2024-07-24 22:34:28.225496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.225523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 00:25:02.690 [2024-07-24 22:34:28.225654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.225679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 00:25:02.690 [2024-07-24 22:34:28.226425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.226460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 00:25:02.690 [2024-07-24 22:34:28.226589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.226617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it. 
00:25:02.690 [2024-07-24 22:34:28.226721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.690 [2024-07-24 22:34:28.226747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.690 qpair failed and we were unable to recover it.
[... identical connect()/qpair error pair repeats continuously from 22:34:28.226 through 22:34:28.243 (log timestamps 00:25:02.690-00:25:02.693), cycling across tqpairs 0x7f02c0000b90, 0x7f02b8000b90, and 0x7f02b0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:02.693 [2024-07-24 22:34:28.243206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 00:25:02.693 [2024-07-24 22:34:28.243352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 00:25:02.693 [2024-07-24 22:34:28.243485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 00:25:02.693 [2024-07-24 22:34:28.243615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 00:25:02.693 [2024-07-24 22:34:28.243749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 
00:25:02.693 [2024-07-24 22:34:28.243893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.693 [2024-07-24 22:34:28.243918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.693 qpair failed and we were unable to recover it. 00:25:02.693 [2024-07-24 22:34:28.244023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.244049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.244174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.244199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.244305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.244335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.244442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.244469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.245269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.245300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.245443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.245471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.246423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.246455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.246595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.246622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.246720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.246746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.246867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.246893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.246998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.247169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.247302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.247433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.247607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.247740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.247902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.247928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.248313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.248872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.248898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.249049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.249087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.249204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.249234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.249371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.249399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.250285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.250317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.250422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.250449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.250610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.250638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.250742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.250767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.250881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.250907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.251025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.251051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.251151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.251179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 
00:25:02.694 [2024-07-24 22:34:28.251298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.251328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.251444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.694 [2024-07-24 22:34:28.251472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.694 qpair failed and we were unable to recover it. 00:25:02.694 [2024-07-24 22:34:28.251593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.251620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.251727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.251759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.251871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.251898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.252029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.252701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.252958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.252983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.253368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.253900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.253925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.254034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.254063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.254166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.254193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.254298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.254325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.255205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.255373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.255506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.255678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.255825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.255952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.255979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.256094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.256238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.256378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.256520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.256683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.256843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.256869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.256975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.257106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.257267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.257414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.257587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 
00:25:02.695 [2024-07-24 22:34:28.257737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.695 qpair failed and we were unable to recover it. 00:25:02.695 [2024-07-24 22:34:28.257869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.695 [2024-07-24 22:34:28.257895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.258011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.258037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.258175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.258205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.258340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.258368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.259091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.259246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.259381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.259525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.259656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.259786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.259933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.259959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.260463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.260870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.260895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.261159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.261843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.261875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.261991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.262526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.262833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.262977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.263003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.263763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.263794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.263934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.263961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.264624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.264909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.264935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.265899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.265935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.696 [2024-07-24 22:34:28.266063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.266090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 
00:25:02.696 [2024-07-24 22:34:28.266231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.696 [2024-07-24 22:34:28.266257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.696 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.266360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.266386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.266493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.266521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.266580] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.697 [2024-07-24 22:34:28.266620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.697 [2024-07-24 22:34:28.266637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.697 [2024-07-24 22:34:28.266643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.266650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.697 [2024-07-24 22:34:28.266663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.697 [2024-07-24 22:34:28.266668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.266800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.266844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.266956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.266924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:02.697 [2024-07-24 22:34:28.266990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.267004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:02.697 [2024-07-24 22:34:28.267049] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:02.697 [2024-07-24 22:34:28.267132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:02.697 [2024-07-24 22:34:28.267179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.267303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.267443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.267584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.267724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.267866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.267894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.267997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.268715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.268873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.268975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.269396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.269950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.269976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.270074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.270219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.270354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.270489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.270626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 
00:25:02.697 [2024-07-24 22:34:28.270755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.270882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.270910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.271020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.271047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.271154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.697 [2024-07-24 22:34:28.271180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.697 qpair failed and we were unable to recover it. 00:25:02.697 [2024-07-24 22:34:28.271327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.271357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 [2024-07-24 22:34:28.271467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.271502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.271612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.271639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.271745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.271773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.271878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.271905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.272015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 [2024-07-24 22:34:28.272196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.272339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.272477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.272620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 00:25:02.698 [2024-07-24 22:34:28.272757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.698 [2024-07-24 22:34:28.272784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.698 qpair failed and we were unable to recover it. 
00:25:02.698 (the connect() failed / sock connection error pair above repeats continuously from 22:34:28.272 through 22:34:28.288 for tqpairs 0x7f02b0000b90, 0x7f02b8000b90 and 0x7f02c0000b90, all targeting addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it.")
00:25:02.700 [2024-07-24 22:34:28.288367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.288395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.288544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.288571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.288675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.288701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.288803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.288829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.288928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.288955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 
00:25:02.700 [2024-07-24 22:34:28.289053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 
00:25:02.700 [2024-07-24 22:34:28.289727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.289889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.289999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.290162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.290339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 
00:25:02.700 [2024-07-24 22:34:28.290466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.290607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.700 [2024-07-24 22:34:28.290781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.700 [2024-07-24 22:34:28.290807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.700 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.290905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.290932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.291035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.291180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.291318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.291462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.291607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.291744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.291903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.291929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.292570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.292964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.292990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.293096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.293231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.293372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.293509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.293660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.293800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.293937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.293964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.294615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.294878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.294905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.295322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.295898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.295927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.296034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.296162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.296287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.296422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.296555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 
00:25:02.701 [2024-07-24 22:34:28.296697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.701 qpair failed and we were unable to recover it. 00:25:02.701 [2024-07-24 22:34:28.296839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.701 [2024-07-24 22:34:28.296866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.296973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 
00:25:02.702 [2024-07-24 22:34:28.297386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.297939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.297965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 
00:25:02.702 [2024-07-24 22:34:28.298097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.298242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.298383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.298523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.298664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 
00:25:02.702 [2024-07-24 22:34:28.298804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.298944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.298971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.299087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.299114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.299226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.299252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 00:25:02.702 [2024-07-24 22:34:28.299382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.702 [2024-07-24 22:34:28.299423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x168b120 with addr=10.0.0.2, port=4420 00:25:02.702 qpair failed and we were unable to recover it. 
00:25:02.704 [2024-07-24 22:34:28.314680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.314713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.314824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.314851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.314957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.314983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 
00:25:02.704 [2024-07-24 22:34:28.315358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.315908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.315937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 
00:25:02.704 [2024-07-24 22:34:28.316040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 
00:25:02.704 [2024-07-24 22:34:28.316711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.316871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.316982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.317009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.704 [2024-07-24 22:34:28.317120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.704 [2024-07-24 22:34:28.317150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.704 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.317258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.317388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.317534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.317662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.317790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.317929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.317957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.318067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.318203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.318335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.318468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.318609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.318745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.318876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.318902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.319395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.319958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.319984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.320101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.320233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.320365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.320508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.320648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.320773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.320899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.320925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.321449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.321886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.321913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.322165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.322841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.322867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.322990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.323124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.323270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.323444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 
00:25:02.705 [2024-07-24 22:34:28.323605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.323777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.323910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.705 [2024-07-24 22:34:28.323939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.705 qpair failed and we were unable to recover it. 00:25:02.705 [2024-07-24 22:34:28.324043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-24 22:34:28.324172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 
00:25:02.706 [2024-07-24 22:34:28.324343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 A controller has encountered a failure and is being reset. 00:25:02.706 [2024-07-24 22:34:28.324548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-24 22:34:28.324686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-24 22:34:28.324821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 00:25:02.706 [2024-07-24 22:34:28.324954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.706 [2024-07-24 22:34:28.324980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.706 qpair failed and we were unable to recover it. 
00:25:02.706 [2024-07-24 22:34:28.325086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.706 [2024-07-24 22:34:28.325114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420
00:25:02.706 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1023 connect() failed, errno = 111 / nvme_tcp.c:2383 sock connection error with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats ~115 more times between 22:34:28.325227 and 22:34:28.340885 for tqpairs 0x7f02b0000b90, 0x7f02b8000b90, 0x7f02c0000b90 and 0x168b120 ...]
00:25:02.975 [2024-07-24 22:34:28.340988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 
00:25:02.975 [2024-07-24 22:34:28.341630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.341907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.341936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.342038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.342064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.342167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.342195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 
00:25:02.975 [2024-07-24 22:34:28.342305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.342334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.342438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.342464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.975 qpair failed and we were unable to recover it. 00:25:02.975 [2024-07-24 22:34:28.342575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.975 [2024-07-24 22:34:28.342601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.342708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.342735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.342846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.342872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.342971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.342997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.343651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.343907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.343934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.344304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.344834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.344966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.344995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.345716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.345878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.345988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.346386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.346941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.346966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.347064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.347203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.347340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.347468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.347609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.347746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.347877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.347905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.348416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.348957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.348987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.976 [2024-07-24 22:34:28.349088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.349114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.349220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.349246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.349346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.349372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.349471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.349505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 00:25:02.976 [2024-07-24 22:34:28.349617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.976 [2024-07-24 22:34:28.349644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.976 qpair failed and we were unable to recover it. 
00:25:02.977 [2024-07-24 22:34:28.349749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.349774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.349880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.349907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 
00:25:02.977 [2024-07-24 22:34:28.350421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 00:25:02.977 [2024-07-24 22:34:28.350957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.350983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 
00:25:02.977 [2024-07-24 22:34:28.351097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.977 [2024-07-24 22:34:28.351125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.977 qpair failed and we were unable to recover it. 
[repeated output elided: the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it." pair recurs continuously from 22:34:28.351097 through 22:34:28.366712, always against addr=10.0.0.2, port=4420, cycling over tqpair=0x7f02b8000b90, 0x7f02c0000b90, and 0x7f02b0000b90]
00:25:02.979 [2024-07-24 22:34:28.366819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.366845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.366956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.366983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.367484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.367885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.367912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.368008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.368140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.368315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.368450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.368591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.368732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.368870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.368895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.369603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.369867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.369894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.370263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.370794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 
00:25:02.979 [2024-07-24 22:34:28.370933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.370958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.371057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.979 [2024-07-24 22:34:28.371083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.979 qpair failed and we were unable to recover it. 00:25:02.979 [2024-07-24 22:34:28.371194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.371322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.371454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.371587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.371718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.371856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.371882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.371986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.372123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.372256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.372395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.372536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.372669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.372801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.372927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.372954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.373662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.373922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.373948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.374317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.374858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.374885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.374993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.375681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.375832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.375982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.376385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 00:25:02.980 [2024-07-24 22:34:28.376949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.980 [2024-07-24 22:34:28.376975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.980 qpair failed and we were unable to recover it. 
00:25:02.980 [2024-07-24 22:34:28.377079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.980 [2024-07-24 22:34:28.377105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420
00:25:02.980 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it." triplets repeat through 22:34:28.392408 for tqpairs 0x7f02c0000b90, 0x7f02b0000b90, and 0x7f02b8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:02.982 [2024-07-24 22:34:28.392521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.392548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.392654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.392682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.392793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.392819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.392925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.392951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.393053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 
00:25:02.982 [2024-07-24 22:34:28.393177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.393300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.393440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.393621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 00:25:02.982 [2024-07-24 22:34:28.393747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.982 [2024-07-24 22:34:28.393773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.982 qpair failed and we were unable to recover it. 
00:25:02.982 [2024-07-24 22:34:28.393871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.393897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 
00:25:02.983 [2024-07-24 22:34:28.394576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.394971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.394997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.395101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 
00:25:02.983 [2024-07-24 22:34:28.395234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.395367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.395500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.395632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.395762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 
00:25:02.983 [2024-07-24 22:34:28.395889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.395915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.396030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.396165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:02.983 [2024-07-24 22:34:28.396294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.396427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02b8000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 
00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:02.983 [2024-07-24 22:34:28.396568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.396700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 00:25:02.983 [2024-07-24 22:34:28.396830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.396856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f02c0000b90 with addr=10.0.0.2, port=4420 00:25:02.983 qpair failed and we were unable to recover it. 
00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:02.983 [2024-07-24 22:34:28.397008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.983 [2024-07-24 22:34:28.397054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1699190 with addr=10.0.0.2, port=4420 00:25:02.983 [2024-07-24 22:34:28.397076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1699190 is same with the state(5) to be set 00:25:02.983 [2024-07-24 22:34:28.397105] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1699190 (9): Bad file descriptor 00:25:02.983 [2024-07-24 22:34:28.397129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.983 [2024-07-24 22:34:28.397152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.983 [2024-07-24 22:34:28.397172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:02.983 Unable to reset the controller. 
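For context on the failures above (not part of the log): on Linux, errno 111 is ECONNREFUSED, meaning the target at 10.0.0.2:4420 actively refused each TCP connect while its listener was down during the disconnect test. A minimal stdlib-only Python sketch reproduces the same errno against a closed local port (the port number here is an illustrative choice, not anything SPDK uses):

```python
import errno
import socket

def try_connect(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        # connect_ex() returns the raw errno instead of raising, which makes
        # the ECONNREFUSED case easy to observe directly.
        return s.connect_ex((host, port))

# Nothing listens on TCP port 4 on loopback, so the kernel answers with RST
# and connect_ex() reports ECONNREFUSED (111 on Linux), the same errno the
# SPDK initiator logs for each failed qpair connect.
print(try_connect("127.0.0.1", 4))
```

This is only a sketch of the failure mode; the SPDK host performs the equivalent connect inside posix_sock_create, as the `posix.c:1023` frames show.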
00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 Malloc0 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 [2024-07-24 22:34:28.447139] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 [2024-07-24 22:34:28.475426] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:02.983 22:34:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3928842 00:25:03.919 Controller properly reset. 00:25:09.196 Initializing NVMe Controllers 00:25:09.196 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.196 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:09.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:09.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:09.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:09.196 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:09.196 Initialization complete. Launching workers. 
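The transition above, from a long run of refused connects to "Controller properly reset." once the target listens again, is a retry-until-ready pattern. A hedged Python sketch of that pattern follows; the attempt count and delay are illustrative values, not SPDK's actual reconnect parameters:

```python
import socket
import time

def wait_for_listener(host: str, port: int, attempts: int = 50, delay: float = 0.1) -> bool:
    """Poll a TCP endpoint until it accepts a connection or attempts run out.

    Mirrors the shape of the log: each refused connect (errno 111) is simply
    retried until the target finishes resetting and is listening again.
    """
    for _ in range(attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:
                return True
        time.sleep(delay)
    return False
```

In the test itself this role is played by the initiator's reconnect poller (`spdk_nvme_ctrlr_reconnect_poll_async` in the error frames); the sketch only captures the polling structure, not the NVMe-oF state machine.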
00:25:09.196 Starting thread on core 1 00:25:09.196 Starting thread on core 2 00:25:09.196 Starting thread on core 3 00:25:09.196 Starting thread on core 0 00:25:09.196 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:09.196 00:25:09.196 real 0m10.776s 00:25:09.196 user 0m32.429s 00:25:09.196 sys 0m8.053s 00:25:09.196 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:09.196 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:09.197 ************************************ 00:25:09.197 END TEST nvmf_target_disconnect_tc2 00:25:09.197 ************************************ 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:09.197 rmmod nvme_tcp 00:25:09.197 rmmod nvme_fabrics 00:25:09.197 rmmod nvme_keyring 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3929157 ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3929157 ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3929157' 00:25:09.197 killing process with pid 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3929157 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:09.197 22:34:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:11.106 00:25:11.106 real 0m15.144s 00:25:11.106 user 0m57.091s 00:25:11.106 sys 0m10.511s 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:11.106 ************************************ 00:25:11.106 END TEST nvmf_target_disconnect 00:25:11.106 ************************************ 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:11.106 00:25:11.106 real 5m0.793s 00:25:11.106 user 10m58.839s 00:25:11.106 sys 1m12.137s 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.106 22:34:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.106 ************************************ 00:25:11.106 END TEST nvmf_host 00:25:11.106 ************************************ 00:25:11.106 00:25:11.106 real 19m40.676s 00:25:11.106 user 47m12.242s 00:25:11.106 sys 4m44.262s 00:25:11.106 22:34:36 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:11.106 22:34:36 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.106 ************************************ 00:25:11.106 END TEST nvmf_tcp 00:25:11.106 ************************************ 00:25:11.106 22:34:36 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:11.106 22:34:36 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:11.106 22:34:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:11.106 22:34:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:11.106 22:34:36 -- common/autotest_common.sh@10 -- # set +x 00:25:11.364 ************************************ 00:25:11.364 START TEST spdkcli_nvmf_tcp 00:25:11.364 ************************************ 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:11.364 * Looking for test storage... 00:25:11.364 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:11.364 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3930098 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3930098 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3930098 ']' 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:11.365 22:34:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.365 [2024-07-24 22:34:36.943574] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:25:11.365 [2024-07-24 22:34:36.943675] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3930098 ] 00:25:11.365 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.365 [2024-07-24 22:34:37.002677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:11.624 [2024-07-24 22:34:37.120947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.624 [2024-07-24 22:34:37.121016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:25:11.624 22:34:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:11.624 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:11.624 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:11.624 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:11.624 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:11.624 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:11.624 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:11.624 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:11.624 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:11.624 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:11.624 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:11.624 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:11.624 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:11.624 ' 00:25:14.159 [2024-07-24 22:34:39.844745] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.536 [2024-07-24 22:34:41.084930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:18.065 [2024-07-24 22:34:43.387917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:19.972 [2024-07-24 22:34:45.366084] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:21.345 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:21.345 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:21.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:21.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:21.345 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:21.345 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:21.345 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:21.345 22:34:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:21.345 22:34:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.345 22:34:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 22:34:46 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:21.345 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.345 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.345 22:34:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:21.345 22:34:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.913 22:34:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:21.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:21.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:21.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:21.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:21.913 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:21.913 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:21.913 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:21.913 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:21.913 ' 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:27.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:27.189 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:27.189 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:27.190 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3930098 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3930098 ']' 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3930098 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3930098 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3930098' 00:25:27.190 killing process with pid 3930098 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3930098 00:25:27.190 22:34:52 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3930098 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:27.449 
22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3930098 ']' 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3930098 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3930098 ']' 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3930098 00:25:27.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3930098) - No such process 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3930098 is not found' 00:25:27.449 Process with pid 3930098 is not found 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:27.449 00:25:27.449 real 0m16.253s 00:25:27.449 user 0m34.596s 00:25:27.449 sys 0m0.806s 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.449 22:34:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:27.449 ************************************ 00:25:27.449 END TEST spdkcli_nvmf_tcp 00:25:27.449 ************************************ 00:25:27.449 22:34:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:27.449 22:34:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:27.449 22:34:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.449 22:34:53 -- common/autotest_common.sh@10 -- # set +x 00:25:27.449 ************************************ 00:25:27.449 START TEST 
nvmf_identify_passthru 00:25:27.449 ************************************ 00:25:27.449 22:34:53 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:27.707 * Looking for test storage... 00:25:27.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:27.707 22:34:53 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.707 22:34:53 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.707 22:34:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.707 22:34:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.707 22:34:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.707 22:34:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.707 22:34:53 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.707 22:34:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:27.707 22:34:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:27.707 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:27.708 22:34:53 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.708 22:34:53 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.708 22:34:53 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.708 22:34:53 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.708 22:34:53 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.708 22:34:53 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.708 22:34:53 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.708 22:34:53 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
00:25:27.708 22:34:53 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.708 22:34:53 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.708 22:34:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:27.708 22:34:53 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:27.708 22:34:53 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:27.708 22:34:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=()
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:25:29.625 Found 0000:08:00.0 (0x8086 - 0x159b)
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)'
00:25:29.625 Found 0000:08:00.1 (0x8086 - 0x159b)
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:29.625 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0'
00:25:29.626 Found net devices under 0000:08:00.0: cvl_0_0
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1'
00:25:29.626 Found net devices under 0000:08:00.1: cvl_0_1
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:29.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:29.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms
00:25:29.626 
00:25:29.626 --- 10.0.0.2 ping statistics ---
00:25:29.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:29.626 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:29.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:29.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms
00:25:29.626 
00:25:29.626 --- 10.0.0.1 ping statistics ---
00:25:29.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:29.626 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:29.626 22:34:54 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:29.626 22:34:54 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:29.626 22:34:54 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=()
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs))
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=()
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:25:29.626 22:34:54 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr'
00:25:29.626 22:34:55 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 ))
00:25:29.626 22:34:55 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:84:00.0
00:25:29.626 22:34:55 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:84:00.0
00:25:29.626 22:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:84:00.0
00:25:29.626 22:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:84:00.0 ']'
00:25:29.626 22:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0
00:25:29.626 22:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:25:29.626 22:34:55 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:25:29.626 EAL: No free 2048 kB hugepages reported on node 1
00:25:33.810 22:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ8275016S1P0FGN
00:25:33.810 22:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:84:00.0' -i 0
00:25:33.810 22:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:'
00:25:33.810 22:34:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}'
00:25:33.810 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3933654
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3933654
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3933654 ']'
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:37.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:37.990 [2024-07-24 22:35:03.433276] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization...
00:25:37.990 [2024-07-24 22:35:03.433375] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:37.990 EAL: No free 2048 kB hugepages reported on node 1
00:25:37.990 [2024-07-24 22:35:03.500136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:37.990 [2024-07-24 22:35:03.616984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:37.990 [2024-07-24 22:35:03.617035] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:37.990 [2024-07-24 22:35:03.617051] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:37.990 [2024-07-24 22:35:03.617064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:37.990 [2024-07-24 22:35:03.617076] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:37.990 [2024-07-24 22:35:03.618051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:37.990 [2024-07-24 22:35:03.618133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:37.990 [2024-07-24 22:35:03.618182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:25:37.990 [2024-07-24 22:35:03.618185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:37.990 INFO: Log level set to 20
00:25:37.990 INFO: Requests:
00:25:37.990 {
00:25:37.990 "jsonrpc": "2.0",
00:25:37.990 "method": "nvmf_set_config",
00:25:37.990 "id": 1,
00:25:37.990 "params": {
00:25:37.990 "admin_cmd_passthru": {
00:25:37.990 "identify_ctrlr": true
00:25:37.990 }
00:25:37.990 }
00:25:37.990 }
00:25:37.990 
00:25:37.990 INFO: response:
00:25:37.990 {
00:25:37.990 "jsonrpc": "2.0",
00:25:37.990 "id": 1,
00:25:37.990 "result": true
00:25:37.990 }
00:25:37.990 
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:37.990 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:37.990 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:37.990 INFO: Setting log level to 20
00:25:37.990 INFO: Setting log level to 20
00:25:37.990 INFO: Log level set to 20
00:25:37.990 INFO: Log level set to 20
00:25:37.990 INFO: Requests:
00:25:37.990 {
00:25:37.990 "jsonrpc": "2.0",
00:25:37.990 "method": "framework_start_init",
00:25:37.990 "id": 1
00:25:37.990 }
00:25:37.990 
00:25:37.990 INFO: Requests:
00:25:37.990 {
00:25:37.990 "jsonrpc": "2.0",
00:25:37.990 "method": "framework_start_init",
00:25:37.990 "id": 1
00:25:37.990 }
00:25:37.990 
00:25:38.249 [2024-07-24 22:35:03.777588] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled
00:25:38.249 INFO: response:
00:25:38.249 {
00:25:38.249 "jsonrpc": "2.0",
00:25:38.249 "id": 1,
00:25:38.249 "result": true
00:25:38.249 }
00:25:38.249 
00:25:38.249 INFO: response:
00:25:38.249 {
00:25:38.249 "jsonrpc": "2.0",
00:25:38.249 "id": 1,
00:25:38.249 "result": true
00:25:38.249 }
00:25:38.249 
00:25:38.249 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.249 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:38.249 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.249 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.249 INFO: Setting log level to 40
00:25:38.249 INFO: Setting log level to 40
00:25:38.250 INFO: Setting log level to 40
00:25:38.250 [2024-07-24 22:35:03.787566] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:38.250 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:38.250 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt
00:25:38.250 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:38.250 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:38.250 22:35:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:84:00.0
00:25:38.250 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:38.250 22:35:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.526 Nvme0n1
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.526 [2024-07-24 22:35:06.663241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.526 [
00:25:41.526 {
00:25:41.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:25:41.526 "subtype": "Discovery",
00:25:41.526 "listen_addresses": [],
00:25:41.526 "allow_any_host": true,
00:25:41.526 "hosts": []
00:25:41.526 },
00:25:41.526 {
00:25:41.526 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:25:41.526 "subtype": "NVMe",
00:25:41.526 "listen_addresses": [
00:25:41.526 {
00:25:41.526 "trtype": "TCP",
00:25:41.526 "adrfam": "IPv4",
00:25:41.526 "traddr": "10.0.0.2",
00:25:41.526 "trsvcid": "4420"
00:25:41.526 }
00:25:41.526 ],
00:25:41.526 "allow_any_host": true,
00:25:41.526 "hosts": [],
00:25:41.526 "serial_number": "SPDK00000000000001",
00:25:41.526 "model_number": "SPDK bdev Controller",
00:25:41.526 "max_namespaces": 1,
00:25:41.526 "min_cntlid": 1,
00:25:41.526 "max_cntlid": 65519,
00:25:41.526 "namespaces": [
00:25:41.526 {
00:25:41.526 "nsid": 1,
00:25:41.526 "bdev_name": "Nvme0n1",
00:25:41.526 "name": "Nvme0n1",
00:25:41.526 "nguid": "DDCFA57A18C948559D8E08C744F44263",
00:25:41.526 "uuid": "ddcfa57a-18c9-4855-9d8e-08c744f44263"
00:25:41.526 }
00:25:41.526 ]
00:25:41.526 }
00:25:41.526 ]
00:25:41.526 22:35:06 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:'
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}'
00:25:41.526 EAL: No free 2048 kB hugepages reported on node 1
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ8275016S1P0FGN
00:25:41.526 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:25:41.527 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:'
00:25:41.527 22:35:06 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}'
00:25:41.527 EAL: No free 2048 kB hugepages reported on node 1
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ8275016S1P0FGN '!=' PHLJ8275016S1P0FGN ']'
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']'
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:41.527 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:41.527 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:41.527 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT
00:25:41.527 22:35:07 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:41.527 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:41.527 rmmod nvme_tcp
00:25:41.527 rmmod nvme_fabrics
00:25:41.527 rmmod nvme_keyring
00:25:41.784 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:41.784 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e
00:25:41.784 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0
00:25:41.784 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3933654 ']'
00:25:41.784 22:35:07 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3933654
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3933654 ']'
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3933654
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3933654
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3933654'
00:25:41.784 killing process with pid 3933654
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3933654
00:25:41.784 22:35:07 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3933654
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:43.154 22:35:08 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:43.154 22:35:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:43.154 22:35:08 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:45.687 22:35:10 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:45.687 
00:25:45.687 real 0m17.716s
00:25:45.687 user 0m26.858s
00:25:45.687 sys 0m2.126s
00:25:45.687 22:35:10 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:45.687 22:35:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:25:45.687 ************************************
00:25:45.687 END TEST nvmf_identify_passthru
00:25:45.687 ************************************
00:25:45.687 22:35:10 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:25:45.687 22:35:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:25:45.687 22:35:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:45.687 22:35:10 -- common/autotest_common.sh@10 -- # set +x
00:25:45.687 ************************************
00:25:45.687 START TEST nvmf_dif
00:25:45.687 ************************************
00:25:45.687 22:35:10 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh
00:25:45.687 * Looking for test storage...
00:25:45.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:25:45.687 22:35:10 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@7 -- # uname -s
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:45.687 22:35:10 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:45.687 22:35:10 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:45.687 22:35:10 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:45.687 22:35:10 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:45.687 22:35:10 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.687 22:35:10 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.688 22:35:10 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.688 22:35:10 nvmf_dif -- paths/export.sh@5 -- # export PATH
00:25:45.688 22:35:10 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@47 -- # : 0
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:45.688 22:35:10 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16
00:25:45.688 22:35:10 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:25:45.688 22:35:10 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64
00:25:45.688 22:35:10 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1
00:25:45.688 22:35:10 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:45.688 22:35:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:25:45.688 22:35:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:25:45.688 22:35:10 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable
00:25:45.688 22:35:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@296 -- # e810=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@297 -- # x722=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@298 -- # mlx=()
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)'
00:25:47.062 Found 0000:08:00.0 (0x8086 - 0x159b)
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1
(0x8086 - 0x159b)' 00:25:47.062 Found 0000:08:00.1 (0x8086 - 0x159b) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 00:25:47.062 Found net devices under 0000:08:00.0: cvl_0_0 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:25:47.062 Found net devices under 0000:08:00.1: cvl_0_1 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:47.062 22:35:12 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:47.062 22:35:12 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:47.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:47.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:25:47.063 00:25:47.063 --- 10.0.0.2 ping statistics --- 00:25:47.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.063 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:47.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:47.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:25:47.063 00:25:47.063 --- 10.0.0.1 ping statistics --- 00:25:47.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:47.063 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:47.063 22:35:12 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.999 0000:00:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:47.999 0000:84:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:47.999 0000:00:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:47.999 0000:00:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:47.999 0000:00:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:47.999 0000:00:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:47.999 0000:00:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:47.999 0000:00:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:47.999 0000:00:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:47.999 0000:80:04.7 (8086 3c27): Already using the vfio-pci driver 00:25:47.999 0000:80:04.6 (8086 3c26): Already using the vfio-pci driver 00:25:47.999 0000:80:04.5 (8086 3c25): Already using the vfio-pci driver 00:25:47.999 0000:80:04.4 (8086 3c24): Already using the vfio-pci driver 00:25:47.999 0000:80:04.3 (8086 3c23): Already using the vfio-pci driver 00:25:47.999 0000:80:04.2 (8086 3c22): Already using the vfio-pci driver 00:25:47.999 0000:80:04.1 (8086 3c21): Already using the vfio-pci driver 00:25:47.999 0000:80:04.0 (8086 3c20): Already using the vfio-pci driver 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.999 22:35:13 
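The two `ping -c 1` checks above confirm connectivity in both directions between the host interface and the `cvl_0_0_ns_spdk` namespace before the test proceeds. If one wanted to consume those summary lines programmatically, a small sketch (illustrative only, not part of the test scripts) could parse the `rtt min/avg/max/mdev` line:

```python
import re

def parse_rtt(line: str) -> dict:
    """Extract min/avg/max/mdev (in ms) from a ping summary line."""
    m = re.search(r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line)
    if not m:
        raise ValueError("no rtt summary found")
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, map(float, m.groups())))

# Summary line copied from the log above
line = "rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms"
print(parse_rtt(line))  # {'min': 0.175, 'avg': 0.175, 'max': 0.175, 'mdev': 0.0}
```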
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.999 22:35:13 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:47.999 22:35:13 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3936158 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:47.999 22:35:13 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3936158 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3936158 ']' 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.999 22:35:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:48.257 [2024-07-24 22:35:13.725243] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:25:48.257 [2024-07-24 22:35:13.725342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.257 EAL: No free 2048 kB hugepages reported on node 1 00:25:48.257 [2024-07-24 22:35:13.789800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.257 [2024-07-24 22:35:13.905727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:48.257 [2024-07-24 22:35:13.905790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:48.257 [2024-07-24 22:35:13.905807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:48.257 [2024-07-24 22:35:13.905820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:48.257 [2024-07-24 22:35:13.905832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
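As the trace shows, `nvmfappstart` runs `nvmf_tgt` inside the target namespace by prefixing the command with `NVMF_TARGET_NS_CMD` (`ip netns exec cvl_0_0_ns_spdk`) before the binary and its `-i`/`-e` flags. A minimal Python sketch of that argv assembly (paths shortened, purely illustrative):

```python
# NVMF_TARGET_NS_CMD: run the target inside its network namespace
ns_cmd = ["ip", "netns", "exec", "cvl_0_0_ns_spdk"]
# nvmf_tgt with shm id 0 and tracepoint group mask 0xFFFF, as in the log
app = ["nvmf_tgt", "-i", "0", "-e", "0xFFFF"]
# Mirrors: NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
argv = ns_cmd + app
print(" ".join(argv))  # ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF
```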
00:25:48.257 [2024-07-24 22:35:13.905863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:25:48.515 22:35:14 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 22:35:14 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:48.515 22:35:14 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:48.515 22:35:14 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 [2024-07-24 22:35:14.035752] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.515 22:35:14 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 ************************************ 00:25:48.515 START TEST fio_dif_1_default 00:25:48.515 ************************************ 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 bdev_null0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:48.515 [2024-07-24 22:35:14.092014] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:48.515 { 00:25:48.515 "params": { 00:25:48.515 "name": "Nvme$subsystem", 00:25:48.515 "trtype": "$TEST_TRANSPORT", 00:25:48.515 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:48.515 "adrfam": "ipv4", 00:25:48.515 "trsvcid": "$NVMF_PORT", 00:25:48.515 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:48.515 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:48.515 "hdgst": ${hdgst:-false}, 00:25:48.515 "ddgst": ${ddgst:-false} 00:25:48.515 }, 00:25:48.515 "method": "bdev_nvme_attach_controller" 00:25:48.515 } 00:25:48.515 EOF 00:25:48.515 )") 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:48.515 22:35:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
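The `gen_nvmf_target_json` heredoc above is expanded once per subsystem and joined through `jq` into the attach-controller parameters. A hypothetical Python equivalent of that substitution (field values mirror the resolved JSON in the log; this is a sketch, not the SPDK helper itself):

```python
import json

def gen_target_json(subsystem: int, traddr: str, trsvcid: str) -> dict:
    """Build one bdev_nvme_attach_controller config entry, as in the heredoc template."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # ${hdgst:-false}
            "ddgst": False,   # ${ddgst:-false}
        },
        "method": "bdev_nvme_attach_controller",
    }

print(json.dumps(gen_target_json(0, "10.0.0.2", "4420"), indent=2))
```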
00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:48.515 "params": { 00:25:48.515 "name": "Nvme0", 00:25:48.515 "trtype": "tcp", 00:25:48.515 "traddr": "10.0.0.2", 00:25:48.515 "adrfam": "ipv4", 00:25:48.515 "trsvcid": "4420", 00:25:48.515 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:48.515 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:48.515 "hdgst": false, 00:25:48.515 "ddgst": false 00:25:48.515 }, 00:25:48.515 "method": "bdev_nvme_attach_controller" 00:25:48.515 }' 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:48.515 22:35:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.773 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:48.773 fio-3.35 
00:25:48.773 Starting 1 thread 00:25:48.773 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.968 00:26:00.968 filename0: (groupid=0, jobs=1): err= 0: pid=3936324: Wed Jul 24 22:35:24 2024 00:26:00.968 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:26:00.968 slat (nsec): min=7422, max=39255, avg=9023.73, stdev=2922.09 00:26:00.968 clat (usec): min=40873, max=44152, avg=40985.33, stdev=203.20 00:26:00.968 lat (usec): min=40881, max=44187, avg=40994.35, stdev=203.79 00:26:00.968 clat percentiles (usec): 00:26:00.968 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:00.968 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:00.968 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:00.968 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:26:00.968 | 99.99th=[44303] 00:26:00.968 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:26:00.968 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:00.968 lat (msec) : 50=100.00% 00:26:00.968 cpu : usr=90.90%, sys=8.77%, ctx=13, majf=0, minf=150 00:26:00.968 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.968 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.968 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.968 00:26:00.968 Run status group 0 (all jobs): 00:26:00.968 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10008-10008msec 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for 
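The fio summary above reports roughly 97 IOPS at 390 KiB/s over a 10008 ms run with 976 issued 4 KiB reads. A quick arithmetic sanity check of those figures (values copied directly from the log):

```python
issued_reads = 976    # from "issued rwts: total=976,0,0,0"
runtime_s = 10.008    # from "run=10008-10008msec"
io_kib = 3904         # from "io=3904KiB"

iops = issued_reads / runtime_s      # completed reads per second
bw_kib_s = io_kib / runtime_s        # bandwidth in KiB/s

print(int(iops), round(bw_kib_s))  # 97 390
```

The slight difference from fio's reported `avg=97.20` IOPS is expected: fio averages periodic samples rather than dividing totals.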
sub in "$@" 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 00:26:00.968 real 0m11.042s 00:26:00.968 user 0m9.971s 00:26:00.968 sys 0m1.117s 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 ************************************ 00:26:00.968 END TEST fio_dif_1_default 00:26:00.968 ************************************ 00:26:00.968 22:35:25 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:00.968 22:35:25 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:00.968 22:35:25 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 ************************************ 00:26:00.968 START TEST fio_dif_1_multi_subsystems 00:26:00.968 ************************************ 00:26:00.968 22:35:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 bdev_null0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 [2024-07-24 22:35:25.190719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 bdev_null1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:00.968 22:35:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # shift 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:00.968 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.968 { 00:26:00.968 "params": { 00:26:00.968 "name": "Nvme$subsystem", 00:26:00.968 "trtype": "$TEST_TRANSPORT", 00:26:00.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.968 "adrfam": "ipv4", 00:26:00.969 "trsvcid": "$NVMF_PORT", 00:26:00.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.969 "hdgst": ${hdgst:-false}, 00:26:00.969 "ddgst": ${ddgst:-false} 00:26:00.969 }, 00:26:00.969 "method": "bdev_nvme_attach_controller" 00:26:00.969 } 00:26:00.969 EOF 
00:26:00.969 )") 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.969 { 00:26:00.969 "params": { 00:26:00.969 "name": "Nvme$subsystem", 00:26:00.969 "trtype": "$TEST_TRANSPORT", 00:26:00.969 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.969 "adrfam": "ipv4", 00:26:00.969 "trsvcid": "$NVMF_PORT", 00:26:00.969 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.969 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.969 "hdgst": ${hdgst:-false}, 00:26:00.969 "ddgst": ${ddgst:-false} 00:26:00.969 }, 00:26:00.969 "method": "bdev_nvme_attach_controller" 00:26:00.969 } 00:26:00.969 EOF 00:26:00.969 )") 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.969 "params": { 00:26:00.969 "name": "Nvme0", 00:26:00.969 "trtype": "tcp", 00:26:00.969 "traddr": "10.0.0.2", 00:26:00.969 "adrfam": "ipv4", 00:26:00.969 "trsvcid": "4420", 00:26:00.969 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.969 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.969 "hdgst": false, 00:26:00.969 "ddgst": false 00:26:00.969 }, 00:26:00.969 "method": "bdev_nvme_attach_controller" 00:26:00.969 },{ 00:26:00.969 "params": { 00:26:00.969 "name": "Nvme1", 00:26:00.969 "trtype": "tcp", 00:26:00.969 "traddr": "10.0.0.2", 00:26:00.969 "adrfam": "ipv4", 00:26:00.969 "trsvcid": "4420", 00:26:00.969 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.969 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.969 "hdgst": false, 00:26:00.969 "ddgst": false 00:26:00.969 }, 00:26:00.969 "method": "bdev_nvme_attach_controller" 00:26:00.969 }' 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:00.969 22:35:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.969 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.969 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.969 fio-3.35 00:26:00.969 Starting 2 threads 00:26:00.969 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.934 00:26:10.934 filename0: (groupid=0, jobs=1): err= 0: pid=3937427: Wed Jul 24 22:35:36 2024 00:26:10.934 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:26:10.934 slat (nsec): min=7639, max=73211, avg=9264.27, stdev=3334.64 00:26:10.934 clat (usec): min=40801, max=43234, avg=40985.26, stdev=156.87 00:26:10.934 lat (usec): min=40809, max=43307, avg=40994.53, stdev=158.58 00:26:10.934 clat percentiles (usec): 00:26:10.934 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:10.934 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:10.934 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:10.934 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[43254], 00:26:10.934 | 99.99th=[43254] 00:26:10.934 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:26:10.934 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:10.934 lat (msec) : 50=100.00% 00:26:10.934 cpu : usr=94.23%, sys=5.43%, ctx=25, majf=0, minf=107 00:26:10.934 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:10.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.934 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.934 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.934 filename1: (groupid=0, jobs=1): err= 0: pid=3937428: Wed Jul 24 22:35:36 2024 00:26:10.934 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10009msec) 00:26:10.934 slat (nsec): min=7591, max=73168, avg=9255.16, stdev=3524.81 00:26:10.934 clat (usec): min=40835, max=42856, avg=40989.28, stdev=155.05 00:26:10.934 lat (usec): min=40851, max=42876, avg=40998.54, stdev=156.16 00:26:10.934 clat percentiles (usec): 00:26:10.934 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:10.934 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:10.934 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:10.934 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:26:10.934 | 99.99th=[42730] 00:26:10.934 bw ( KiB/s): min= 384, max= 416, per=49.74%, avg=388.80, stdev=11.72, samples=20 00:26:10.934 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:10.934 lat (msec) : 50=100.00% 00:26:10.934 cpu : usr=94.75%, sys=4.92%, ctx=14, majf=0, minf=153 00:26:10.934 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.934 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.934 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.934 00:26:10.934 Run status group 0 (all jobs): 00:26:10.934 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10008-10009msec 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 
00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 22:35:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 00:26:10.935 real 0m11.186s 00:26:10.935 user 0m19.894s 00:26:10.935 sys 0m1.311s 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 ************************************ 00:26:10.935 END TEST fio_dif_1_multi_subsystems 00:26:10.935 ************************************ 00:26:10.935 22:35:36 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:10.935 22:35:36 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:10.935 22:35:36 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 ************************************ 00:26:10.935 START TEST fio_dif_rand_params 00:26:10.935 ************************************ 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:10.935 22:35:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 bdev_null0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.935 [2024-07-24 22:35:36.429068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.935 { 00:26:10.935 "params": { 00:26:10.935 "name": "Nvme$subsystem", 00:26:10.935 "trtype": "$TEST_TRANSPORT", 00:26:10.935 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:26:10.935 "adrfam": "ipv4", 00:26:10.935 "trsvcid": "$NVMF_PORT", 00:26:10.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.935 "hdgst": ${hdgst:-false}, 00:26:10.935 "ddgst": ${ddgst:-false} 00:26:10.935 }, 00:26:10.935 "method": "bdev_nvme_attach_controller" 00:26:10.935 } 00:26:10.935 EOF 00:26:10.935 )") 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:10.935 22:35:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:10.935 "params": { 00:26:10.935 "name": "Nvme0", 00:26:10.935 "trtype": "tcp", 00:26:10.935 "traddr": "10.0.0.2", 00:26:10.935 "adrfam": "ipv4", 00:26:10.935 "trsvcid": "4420", 00:26:10.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.935 "hdgst": false, 00:26:10.935 "ddgst": false 00:26:10.935 }, 00:26:10.935 "method": "bdev_nvme_attach_controller" 00:26:10.935 }' 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:10.935 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:10.936 22:35:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:11.194 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:11.194 ... 00:26:11.194 fio-3.35 00:26:11.194 Starting 3 threads 00:26:11.194 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.834 00:26:17.834 filename0: (groupid=0, jobs=1): err= 0: pid=3938490: Wed Jul 24 22:35:42 2024 00:26:17.834 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(120MiB/5048msec) 00:26:17.834 slat (nsec): min=7707, max=37604, avg=13754.87, stdev=3608.72 00:26:17.834 clat (usec): min=4722, max=59029, avg=15777.03, stdev=12488.35 00:26:17.834 lat (usec): min=4736, max=59046, avg=15790.78, stdev=12488.32 00:26:17.834 clat percentiles (usec): 00:26:17.834 | 1.00th=[ 5669], 5.00th=[ 6915], 10.00th=[ 8717], 20.00th=[ 9503], 00:26:17.834 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[12518], 60.00th=[13829], 00:26:17.834 | 70.00th=[14353], 80.00th=[15139], 90.00th=[19530], 95.00th=[53216], 00:26:17.834 | 99.00th=[55837], 99.50th=[55837], 99.90th=[58983], 99.95th=[58983], 00:26:17.834 | 99.99th=[58983] 00:26:17.834 bw ( KiB/s): min=18432, max=32000, per=32.34%, avg=24401.60, stdev=4344.48, samples=10 00:26:17.834 iops : min= 144, max= 250, avg=190.60, stdev=33.94, samples=10 00:26:17.834 lat (msec) : 10=31.59%, 20=58.47%, 50=2.41%, 100=7.53% 00:26:17.834 cpu : usr=93.80%, sys=5.69%, ctx=12, majf=0, minf=79 00:26:17.834 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.834 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.834 filename0: (groupid=0, jobs=1): err= 0: pid=3938491: Wed Jul 24 22:35:42 2024 00:26:17.834 read: IOPS=191, BW=23.9MiB/s (25.0MB/s)(121MiB/5045msec) 00:26:17.834 slat 
(nsec): min=7724, max=61476, avg=14249.81, stdev=3762.35 00:26:17.834 clat (usec): min=5461, max=57840, avg=15637.08, stdev=12036.89 00:26:17.834 lat (usec): min=5471, max=57851, avg=15651.33, stdev=12037.11 00:26:17.834 clat percentiles (usec): 00:26:17.834 | 1.00th=[ 5997], 5.00th=[ 6783], 10.00th=[ 8717], 20.00th=[ 9634], 00:26:17.834 | 30.00th=[10159], 40.00th=[10945], 50.00th=[12649], 60.00th=[13960], 00:26:17.834 | 70.00th=[14484], 80.00th=[15139], 90.00th=[17433], 95.00th=[52691], 00:26:17.834 | 99.00th=[55313], 99.50th=[55837], 99.90th=[57934], 99.95th=[57934], 00:26:17.834 | 99.99th=[57934] 00:26:17.834 bw ( KiB/s): min=17152, max=31488, per=32.60%, avg=24605.00, stdev=5718.89, samples=10 00:26:17.834 iops : min= 134, max= 246, avg=192.20, stdev=44.72, samples=10 00:26:17.834 lat (msec) : 10=26.76%, 20=64.00%, 50=1.97%, 100=7.26% 00:26:17.834 cpu : usr=93.42%, sys=6.03%, ctx=14, majf=0, minf=141 00:26:17.834 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.834 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.834 filename0: (groupid=0, jobs=1): err= 0: pid=3938492: Wed Jul 24 22:35:42 2024 00:26:17.834 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(132MiB/5048msec) 00:26:17.834 slat (nsec): min=8303, max=52372, avg=22452.14, stdev=7224.24 00:26:17.834 clat (usec): min=4847, max=90922, avg=14268.98, stdev=11515.70 00:26:17.834 lat (usec): min=4866, max=90934, avg=14291.43, stdev=11515.58 00:26:17.834 clat percentiles (usec): 00:26:17.834 | 1.00th=[ 5604], 5.00th=[ 6194], 10.00th=[ 8160], 20.00th=[ 9110], 00:26:17.834 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[11338], 60.00th=[12518], 00:26:17.834 | 70.00th=[13173], 80.00th=[13829], 90.00th=[15139], 95.00th=[51119], 
00:26:17.834 | 99.00th=[54264], 99.50th=[54789], 99.90th=[56361], 99.95th=[90702], 00:26:17.834 | 99.99th=[90702] 00:26:17.834 bw ( KiB/s): min=18432, max=36096, per=35.72%, avg=26956.80, stdev=5034.02, samples=10 00:26:17.834 iops : min= 144, max= 282, avg=210.60, stdev=39.33, samples=10 00:26:17.834 lat (msec) : 10=38.92%, 20=53.03%, 50=2.46%, 100=5.59% 00:26:17.834 cpu : usr=90.77%, sys=7.15%, ctx=250, majf=0, minf=120 00:26:17.834 IO depths : 1=2.2%, 2=97.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.834 issued rwts: total=1056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.834 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.834 00:26:17.834 Run status group 0 (all jobs): 00:26:17.834 READ: bw=73.7MiB/s (77.3MB/s), 23.7MiB/s-26.1MiB/s (24.8MB/s-27.4MB/s), io=372MiB (390MB), run=5045-5048msec 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 bdev_null0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.834 22:35:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 [2024-07-24 22:35:42.507437] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.834 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.834 bdev_null1 00:26:17.834 22:35:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 bdev_null2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.835 22:35:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.835 { 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme$subsystem", 00:26:17.835 "trtype": "$TEST_TRANSPORT", 00:26:17.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "$NVMF_PORT", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.835 "hdgst": ${hdgst:-false}, 00:26:17.835 "ddgst": ${ddgst:-false} 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 } 00:26:17.835 EOF 00:26:17.835 )") 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.835 { 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme$subsystem", 00:26:17.835 "trtype": "$TEST_TRANSPORT", 00:26:17.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "$NVMF_PORT", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.835 "hdgst": ${hdgst:-false}, 00:26:17.835 "ddgst": ${ddgst:-false} 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 } 00:26:17.835 EOF 00:26:17.835 )") 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:17.835 
22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:17.835 { 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme$subsystem", 00:26:17.835 "trtype": "$TEST_TRANSPORT", 00:26:17.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "$NVMF_PORT", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:17.835 "hdgst": ${hdgst:-false}, 00:26:17.835 "ddgst": ${ddgst:-false} 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 } 00:26:17.835 EOF 00:26:17.835 )") 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme0", 00:26:17.835 "trtype": "tcp", 00:26:17.835 "traddr": "10.0.0.2", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "4420", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:17.835 "hdgst": false, 00:26:17.835 "ddgst": false 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 },{ 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme1", 00:26:17.835 "trtype": "tcp", 00:26:17.835 "traddr": "10.0.0.2", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "4420", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:17.835 "hdgst": false, 00:26:17.835 "ddgst": false 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 },{ 00:26:17.835 "params": { 00:26:17.835 "name": "Nvme2", 00:26:17.835 "trtype": "tcp", 00:26:17.835 "traddr": "10.0.0.2", 00:26:17.835 "adrfam": "ipv4", 00:26:17.835 "trsvcid": "4420", 00:26:17.835 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:17.835 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:17.835 "hdgst": false, 00:26:17.835 "ddgst": false 00:26:17.835 }, 00:26:17.835 "method": "bdev_nvme_attach_controller" 00:26:17.835 }' 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:17.835 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:17.836 22:35:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:17.836 22:35:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:17.836 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.836 ... 00:26:17.836 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.836 ... 00:26:17.836 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:17.836 ... 
00:26:17.836 fio-3.35 00:26:17.836 Starting 24 threads 00:26:17.836 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.038 00:26:30.038 filename0: (groupid=0, jobs=1): err= 0: pid=3939150: Wed Jul 24 22:35:53 2024 00:26:30.038 read: IOPS=111, BW=445KiB/s (456kB/s)(4536KiB/10182msec) 00:26:30.038 slat (usec): min=14, max=166, avg=69.28, stdev=36.53 00:26:30.038 clat (msec): min=33, max=515, avg=142.91, stdev=144.54 00:26:30.038 lat (msec): min=33, max=515, avg=142.98, stdev=144.55 00:26:30.038 clat percentiles (msec): 00:26:30.038 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.038 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 45], 00:26:30.038 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.038 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 514], 99.95th=[ 514], 00:26:30.038 | 99.99th=[ 514] 00:26:30.038 bw ( KiB/s): min= 128, max= 1536, per=4.10%, avg=447.20, stdev=545.74, samples=20 00:26:30.038 iops : min= 32, max= 384, avg=111.80, stdev=136.44, samples=20 00:26:30.038 lat (msec) : 50=63.49%, 100=2.82%, 250=1.76%, 500=31.75%, 750=0.18% 00:26:30.038 cpu : usr=97.55%, sys=1.48%, ctx=110, majf=0, minf=77 00:26:30.038 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.1%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:30.038 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.038 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.038 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.038 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.038 filename0: (groupid=0, jobs=1): err= 0: pid=3939151: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=114, BW=459KiB/s (470kB/s)(4672KiB/10182msec) 00:26:30.039 slat (usec): min=10, max=153, avg=48.69, stdev=39.71 00:26:30.039 clat (msec): min=29, max=411, avg=138.38, stdev=134.44 00:26:30.039 lat (msec): min=29, max=411, avg=138.43, stdev=134.43 00:26:30.039 clat percentiles (msec): 
00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.039 | 70.00th=[ 251], 80.00th=[ 326], 90.00th=[ 338], 95.00th=[ 363], 00:26:30.039 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 414], 99.95th=[ 414], 00:26:30.039 | 99.99th=[ 414] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.22%, avg=460.80, stdev=538.91, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=115.20, stdev=134.73, samples=20 00:26:30.039 lat (msec) : 50=64.21%, 100=0.17%, 250=4.79%, 500=30.82% 00:26:30.039 cpu : usr=95.74%, sys=2.44%, ctx=244, majf=0, minf=46 00:26:30.039 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, jobs=1): err= 0: pid=3939152: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=111, BW=445KiB/s (456kB/s)(4536KiB/10184msec) 00:26:30.039 slat (usec): min=26, max=143, avg=97.18, stdev=15.45 00:26:30.039 clat (msec): min=38, max=493, avg=142.67, stdev=143.77 00:26:30.039 lat (msec): min=38, max=493, avg=142.76, stdev=143.78 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 45], 00:26:30.039 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.039 | 99.00th=[ 405], 99.50th=[ 435], 99.90th=[ 493], 99.95th=[ 493], 00:26:30.039 | 99.99th=[ 493] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.10%, avg=447.30, stdev=545.76, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=111.80, stdev=136.39, samples=20 00:26:30.039 lat (msec) : 50=63.49%, 
100=2.82%, 250=1.41%, 500=32.28% 00:26:30.039 cpu : usr=98.46%, sys=1.12%, ctx=47, majf=0, minf=45 00:26:30.039 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, jobs=1): err= 0: pid=3939153: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=110, BW=440KiB/s (451kB/s)(4472KiB/10163msec) 00:26:30.039 slat (usec): min=9, max=169, avg=77.96, stdev=35.91 00:26:30.039 clat (msec): min=36, max=533, avg=144.79, stdev=144.83 00:26:30.039 lat (msec): min=36, max=533, avg=144.87, stdev=144.84 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.039 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 368], 00:26:30.039 | 99.00th=[ 418], 99.50th=[ 493], 99.90th=[ 535], 99.95th=[ 535], 00:26:30.039 | 99.99th=[ 535] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.04%, avg=440.80, stdev=530.72, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=110.20, stdev=132.68, samples=20 00:26:30.039 lat (msec) : 50=64.04%, 100=1.43%, 250=0.89%, 500=33.27%, 750=0.36% 00:26:30.039 cpu : usr=97.97%, sys=1.22%, ctx=52, majf=0, minf=33 00:26:30.039 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.9%, 16=9.0%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, 
jobs=1): err= 0: pid=3939154: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=113, BW=456KiB/s (467kB/s)(4608KiB/10111msec) 00:26:30.039 slat (usec): min=5, max=148, avg=95.96, stdev=17.39 00:26:30.039 clat (msec): min=20, max=509, avg=139.61, stdev=144.31 00:26:30.039 lat (msec): min=20, max=509, avg=139.70, stdev=144.31 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 22], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.039 | 70.00th=[ 309], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.039 | 99.00th=[ 460], 99.50th=[ 493], 99.90th=[ 510], 99.95th=[ 510], 00:26:30.039 | 99.99th=[ 510] 00:26:30.039 bw ( KiB/s): min= 128, max= 1539, per=4.16%, avg=454.55, stdev=557.87, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=113.60, stdev=139.39, samples=20 00:26:30.039 lat (msec) : 50=66.67%, 250=2.60%, 500=30.38%, 750=0.35% 00:26:30.039 cpu : usr=98.51%, sys=1.07%, ctx=15, majf=0, minf=29 00:26:30.039 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, jobs=1): err= 0: pid=3939155: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=113, BW=452KiB/s (463kB/s)(4600KiB/10174msec) 00:26:30.039 slat (nsec): min=4943, max=59662, avg=19694.50, stdev=8860.65 00:26:30.039 clat (msec): min=24, max=442, avg=141.20, stdev=136.09 00:26:30.039 lat (msec): min=24, max=442, avg=141.22, stdev=136.09 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 27], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 41], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 45], 00:26:30.039 | 70.00th=[ 271], 
80.00th=[ 330], 90.00th=[ 342], 95.00th=[ 363], 00:26:30.039 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 443], 99.95th=[ 443], 00:26:30.039 | 99.99th=[ 443] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.15%, avg=453.60, stdev=526.41, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=113.40, stdev=131.60, samples=20 00:26:30.039 lat (msec) : 50=61.57%, 100=2.43%, 250=4.00%, 500=32.00% 00:26:30.039 cpu : usr=97.12%, sys=1.77%, ctx=106, majf=0, minf=36 00:26:30.039 IO depths : 1=4.6%, 2=10.9%, 4=25.0%, 8=51.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, jobs=1): err= 0: pid=3939156: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=114, BW=457KiB/s (468kB/s)(4608KiB/10092msec) 00:26:30.039 slat (usec): min=10, max=181, avg=33.77, stdev=18.90 00:26:30.039 clat (msec): min=39, max=495, avg=139.89, stdev=137.60 00:26:30.039 lat (msec): min=39, max=495, avg=139.92, stdev=137.61 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 45], 00:26:30.039 | 70.00th=[ 271], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:26:30.039 | 99.00th=[ 372], 99.50th=[ 405], 99.90th=[ 498], 99.95th=[ 498], 00:26:30.039 | 99.99th=[ 498] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.16%, avg=454.40, stdev=542.05, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=113.60, stdev=135.51, samples=20 00:26:30.039 lat (msec) : 50=63.89%, 100=1.39%, 250=1.56%, 500=33.16% 00:26:30.039 cpu : usr=97.36%, sys=1.80%, ctx=67, majf=0, minf=38 00:26:30.039 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 
00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename0: (groupid=0, jobs=1): err= 0: pid=3939157: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=111, BW=445KiB/s (456kB/s)(4480KiB/10067msec) 00:26:30.039 slat (usec): min=8, max=149, avg=73.73, stdev=36.40 00:26:30.039 clat (msec): min=37, max=516, avg=143.15, stdev=143.20 00:26:30.039 lat (msec): min=38, max=516, avg=143.22, stdev=143.18 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.039 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 368], 00:26:30.039 | 99.00th=[ 409], 99.50th=[ 451], 99.90th=[ 518], 99.95th=[ 518], 00:26:30.039 | 99.99th=[ 518] 00:26:30.039 bw ( KiB/s): min= 128, max= 1664, per=4.04%, avg=441.50, stdev=535.86, samples=20 00:26:30.039 iops : min= 32, max= 416, avg=110.35, stdev=133.97, samples=20 00:26:30.039 lat (msec) : 50=62.86%, 100=2.86%, 250=1.96%, 500=32.14%, 750=0.18% 00:26:30.039 cpu : usr=98.46%, sys=1.10%, ctx=15, majf=0, minf=38 00:26:30.039 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename1: (groupid=0, jobs=1): err= 0: pid=3939158: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=114, BW=459KiB/s (470kB/s)(4672KiB/10183msec) 00:26:30.039 slat (usec): min=10, max=153, avg=72.29, stdev=37.68 
00:26:30.039 clat (msec): min=38, max=449, avg=138.85, stdev=135.12 00:26:30.039 lat (msec): min=38, max=449, avg=138.93, stdev=135.09 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 47], 00:26:30.039 | 70.00th=[ 253], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:26:30.039 | 99.00th=[ 368], 99.50th=[ 414], 99.90th=[ 451], 99.95th=[ 451], 00:26:30.039 | 99.99th=[ 451] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.22%, avg=460.80, stdev=538.39, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=115.20, stdev=134.60, samples=20 00:26:30.039 lat (msec) : 50=61.64%, 100=2.74%, 250=4.45%, 500=31.16% 00:26:30.039 cpu : usr=97.68%, sys=1.58%, ctx=109, majf=0, minf=39 00:26:30.039 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename1: (groupid=0, jobs=1): err= 0: pid=3939159: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=125, BW=501KiB/s (513kB/s)(5112KiB/10200msec) 00:26:30.039 slat (usec): min=5, max=143, avg=39.16, stdev=27.23 00:26:30.039 clat (msec): min=12, max=412, avg=127.06, stdev=113.77 00:26:30.039 lat (msec): min=12, max=412, avg=127.10, stdev=113.77 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 58], 00:26:30.039 | 70.00th=[ 222], 80.00th=[ 255], 90.00th=[ 321], 95.00th=[ 338], 00:26:30.039 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 414], 99.95th=[ 414], 00:26:30.039 | 99.99th=[ 414] 00:26:30.039 bw ( KiB/s): min= 128, max= 
1664, per=4.63%, avg=505.75, stdev=533.96, samples=20 00:26:30.039 iops : min= 32, max= 416, avg=126.40, stdev=133.41, samples=20 00:26:30.039 lat (msec) : 20=1.10%, 50=58.37%, 100=1.88%, 250=15.65%, 500=23.00% 00:26:30.039 cpu : usr=98.27%, sys=1.27%, ctx=49, majf=0, minf=41 00:26:30.039 IO depths : 1=3.8%, 2=10.1%, 4=25.0%, 8=52.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename1: (groupid=0, jobs=1): err= 0: pid=3939160: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=111, BW=446KiB/s (457kB/s)(4544KiB/10183msec) 00:26:30.039 slat (usec): min=14, max=170, avg=94.14, stdev=16.09 00:26:30.039 clat (msec): min=37, max=534, avg=142.61, stdev=145.89 00:26:30.039 lat (msec): min=38, max=534, avg=142.71, stdev=145.89 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.039 | 70.00th=[ 313], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 368], 00:26:30.039 | 99.00th=[ 481], 99.50th=[ 493], 99.90th=[ 535], 99.95th=[ 535], 00:26:30.039 | 99.99th=[ 535] 00:26:30.039 bw ( KiB/s): min= 128, max= 1536, per=4.11%, avg=448.00, stdev=544.92, samples=20 00:26:30.039 iops : min= 32, max= 384, avg=112.00, stdev=136.23, samples=20 00:26:30.039 lat (msec) : 50=64.79%, 100=1.41%, 250=1.58%, 500=31.87%, 750=0.35% 00:26:30.039 cpu : usr=98.63%, sys=0.93%, ctx=48, majf=0, minf=43 00:26:30.039 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:26:30.039 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.039 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:26:30.039 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.039 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.039 filename1: (groupid=0, jobs=1): err= 0: pid=3939161: Wed Jul 24 22:35:53 2024 00:26:30.039 read: IOPS=128, BW=514KiB/s (527kB/s)(5248KiB/10203msec) 00:26:30.039 slat (usec): min=4, max=204, avg=41.36, stdev=23.79 00:26:30.039 clat (msec): min=10, max=363, avg=123.06, stdev=109.00 00:26:30.039 lat (msec): min=10, max=363, avg=123.10, stdev=109.00 00:26:30.039 clat percentiles (msec): 00:26:30.039 | 1.00th=[ 11], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.039 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 69], 00:26:30.039 | 70.00th=[ 213], 80.00th=[ 251], 90.00th=[ 271], 95.00th=[ 330], 00:26:30.040 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:26:30.040 | 99.99th=[ 363] 00:26:30.040 bw ( KiB/s): min= 128, max= 1664, per=4.75%, avg=518.40, stdev=543.64, samples=20 00:26:30.040 iops : min= 32, max= 416, avg=129.60, stdev=135.91, samples=20 00:26:30.040 lat (msec) : 20=2.44%, 50=57.16%, 100=1.37%, 250=18.83%, 500=20.20% 00:26:30.040 cpu : usr=96.32%, sys=2.23%, ctx=123, majf=0, minf=57 00:26:30.040 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1312,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename1: (groupid=0, jobs=1): err= 0: pid=3939162: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=111, BW=445KiB/s (455kB/s)(4480KiB/10075msec) 00:26:30.040 slat (usec): min=9, max=158, avg=95.68, stdev=18.24 00:26:30.040 clat (msec): min=37, max=526, avg=143.09, stdev=144.35 00:26:30.040 lat (msec): min=38, max=526, avg=143.19, stdev=144.35 00:26:30.040 clat percentiles 
(msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 363], 00:26:30.040 | 99.00th=[ 439], 99.50th=[ 460], 99.90th=[ 527], 99.95th=[ 527], 00:26:30.040 | 99.99th=[ 527] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.04%, avg=441.60, stdev=531.89, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=110.40, stdev=132.97, samples=20 00:26:30.040 lat (msec) : 50=64.29%, 100=1.43%, 250=2.32%, 500=31.79%, 750=0.18% 00:26:30.040 cpu : usr=98.58%, sys=0.98%, ctx=88, majf=0, minf=40 00:26:30.040 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename1: (groupid=0, jobs=1): err= 0: pid=3939163: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=110, BW=441KiB/s (451kB/s)(4480KiB/10167msec) 00:26:30.040 slat (usec): min=10, max=141, avg=96.20, stdev=18.10 00:26:30.040 clat (msec): min=30, max=481, avg=144.42, stdev=144.24 00:26:30.040 lat (msec): min=31, max=481, avg=144.52, stdev=144.24 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 368], 00:26:30.040 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 481], 99.95th=[ 481], 00:26:30.040 | 99.99th=[ 481] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.04%, avg=441.60, stdev=531.71, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=110.40, stdev=132.93, samples=20 00:26:30.040 lat (msec) : 
50=64.29%, 100=1.43%, 250=0.36%, 500=33.93% 00:26:30.040 cpu : usr=98.63%, sys=0.95%, ctx=14, majf=0, minf=44 00:26:30.040 IO depths : 1=4.8%, 2=11.1%, 4=25.0%, 8=51.4%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename1: (groupid=0, jobs=1): err= 0: pid=3939164: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=111, BW=446KiB/s (457kB/s)(4544KiB/10182msec) 00:26:30.040 slat (usec): min=16, max=170, avg=96.73, stdev=17.42 00:26:30.040 clat (msec): min=25, max=527, avg=141.49, stdev=143.69 00:26:30.040 lat (msec): min=25, max=528, avg=141.58, stdev=143.69 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 363], 00:26:30.040 | 99.00th=[ 401], 99.50th=[ 447], 99.90th=[ 527], 99.95th=[ 527], 00:26:30.040 | 99.99th=[ 527] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.11%, avg=448.00, stdev=544.22, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=112.00, stdev=136.06, samples=20 00:26:30.040 lat (msec) : 50=66.02%, 100=0.18%, 250=1.76%, 500=31.87%, 750=0.18% 00:26:30.040 cpu : usr=98.01%, sys=1.34%, ctx=172, majf=0, minf=63 00:26:30.040 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename1: 
(groupid=0, jobs=1): err= 0: pid=3939165: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=114, BW=459KiB/s (470kB/s)(4672KiB/10174msec) 00:26:30.040 slat (usec): min=4, max=144, avg=36.85, stdev=33.30 00:26:30.040 clat (msec): min=24, max=365, avg=139.04, stdev=132.18 00:26:30.040 lat (msec): min=24, max=365, avg=139.08, stdev=132.17 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 49], 00:26:30.040 | 70.00th=[ 249], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:26:30.040 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:26:30.040 | 99.99th=[ 368] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.22%, avg=460.80, stdev=522.67, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=115.20, stdev=130.67, samples=20 00:26:30.040 lat (msec) : 50=61.30%, 100=1.71%, 250=7.02%, 500=29.97% 00:26:30.040 cpu : usr=97.15%, sys=1.92%, ctx=67, majf=0, minf=43 00:26:30.040 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939166: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=127, BW=508KiB/s (520kB/s)(5184KiB/10201msec) 00:26:30.040 slat (usec): min=6, max=135, avg=31.79, stdev=28.73 00:26:30.040 clat (msec): min=13, max=448, avg=124.70, stdev=107.97 00:26:30.040 lat (msec): min=13, max=448, avg=124.73, stdev=107.98 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 19], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 70], 00:26:30.040 | 70.00th=[ 220], 
80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 326], 00:26:30.040 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 447], 99.95th=[ 447], 00:26:30.040 | 99.99th=[ 447] 00:26:30.040 bw ( KiB/s): min= 128, max= 1664, per=4.70%, avg=512.00, stdev=529.87, samples=20 00:26:30.040 iops : min= 32, max= 416, avg=128.00, stdev=132.47, samples=20 00:26:30.040 lat (msec) : 20=1.08%, 50=58.18%, 100=1.23%, 250=18.21%, 500=21.30% 00:26:30.040 cpu : usr=98.29%, sys=1.17%, ctx=63, majf=0, minf=50 00:26:30.040 IO depths : 1=4.3%, 2=10.5%, 4=24.7%, 8=52.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939167: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=110, BW=441KiB/s (451kB/s)(4480KiB/10161msec) 00:26:30.040 slat (usec): min=8, max=136, avg=62.47, stdev=36.16 00:26:30.040 clat (msec): min=30, max=491, avg=143.48, stdev=143.83 00:26:30.040 lat (msec): min=30, max=491, avg=143.54, stdev=143.80 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 309], 80.00th=[ 330], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.040 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 493], 99.95th=[ 493], 00:26:30.040 | 99.99th=[ 493] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.04%, avg=441.70, stdev=532.08, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=110.40, stdev=132.97, samples=20 00:26:30.040 lat (msec) : 50=63.57%, 100=2.14%, 250=1.96%, 500=32.32% 00:26:30.040 cpu : usr=98.75%, sys=0.88%, ctx=19, majf=0, minf=27 00:26:30.040 IO depths : 1=4.7%, 2=10.9%, 4=24.7%, 8=51.9%, 16=7.8%, 32=0.0%, 
>=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939168: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=113, BW=453KiB/s (464kB/s)(4608KiB/10167msec) 00:26:30.040 slat (nsec): min=8925, max=84550, avg=31904.03, stdev=11626.40 00:26:30.040 clat (msec): min=37, max=495, avg=140.92, stdev=135.90 00:26:30.040 lat (msec): min=37, max=495, avg=140.95, stdev=135.90 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 255], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:26:30.040 | 99.00th=[ 414], 99.50th=[ 460], 99.90th=[ 498], 99.95th=[ 498], 00:26:30.040 | 99.99th=[ 498] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.16%, avg=454.40, stdev=525.90, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=113.60, stdev=131.48, samples=20 00:26:30.040 lat (msec) : 50=62.50%, 100=1.39%, 250=5.03%, 500=31.08% 00:26:30.040 cpu : usr=98.52%, sys=1.08%, ctx=37, majf=0, minf=36 00:26:30.040 IO depths : 1=5.1%, 2=11.3%, 4=24.7%, 8=51.5%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939169: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=112, BW=450KiB/s (461kB/s)(4544KiB/10092msec) 00:26:30.040 slat (usec): min=16, max=176, avg=53.34, 
stdev=35.43 00:26:30.040 clat (msec): min=33, max=527, avg=141.69, stdev=142.06 00:26:30.040 lat (msec): min=33, max=527, avg=141.75, stdev=142.09 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 45], 00:26:30.040 | 70.00th=[ 309], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 363], 00:26:30.040 | 99.00th=[ 401], 99.50th=[ 456], 99.90th=[ 527], 99.95th=[ 527], 00:26:30.040 | 99.99th=[ 527] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.11%, avg=448.10, stdev=545.27, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=112.00, stdev=136.27, samples=20 00:26:30.040 lat (msec) : 50=63.38%, 100=2.82%, 250=1.94%, 500=31.69%, 750=0.18% 00:26:30.040 cpu : usr=97.29%, sys=1.55%, ctx=186, majf=0, minf=34 00:26:30.040 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939170: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=110, BW=441KiB/s (451kB/s)(4480KiB/10168msec) 00:26:30.040 slat (usec): min=4, max=165, avg=51.80, stdev=36.55 00:26:30.040 clat (msec): min=39, max=522, avg=144.80, stdev=144.38 00:26:30.040 lat (msec): min=39, max=522, avg=144.85, stdev=144.41 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 44], 00:26:30.040 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 368], 00:26:30.040 | 99.00th=[ 418], 99.50th=[ 481], 99.90th=[ 523], 99.95th=[ 523], 00:26:30.040 | 99.99th=[ 523] 00:26:30.040 bw ( 
KiB/s): min= 128, max= 1536, per=4.04%, avg=441.60, stdev=532.42, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=110.40, stdev=133.10, samples=20 00:26:30.040 lat (msec) : 50=64.29%, 100=1.43%, 250=0.71%, 500=33.39%, 750=0.18% 00:26:30.040 cpu : usr=97.63%, sys=1.58%, ctx=63, majf=0, minf=42 00:26:30.040 IO depths : 1=5.5%, 2=11.8%, 4=25.0%, 8=50.7%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939171: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=110, BW=441KiB/s (451kB/s)(4480KiB/10161msec) 00:26:30.040 slat (usec): min=9, max=148, avg=61.67, stdev=32.01 00:26:30.040 clat (msec): min=38, max=504, avg=144.65, stdev=144.52 00:26:30.040 lat (msec): min=38, max=504, avg=144.71, stdev=144.54 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 43], 60.00th=[ 45], 00:26:30.040 | 70.00th=[ 317], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.040 | 99.00th=[ 443], 99.50th=[ 451], 99.90th=[ 506], 99.95th=[ 506], 00:26:30.040 | 99.99th=[ 506] 00:26:30.040 bw ( KiB/s): min= 128, max= 1664, per=4.04%, avg=441.50, stdev=535.51, samples=20 00:26:30.040 iops : min= 32, max= 416, avg=110.35, stdev=133.89, samples=20 00:26:30.040 lat (msec) : 50=62.86%, 100=2.86%, 250=2.50%, 500=31.61%, 750=0.18% 00:26:30.040 cpu : usr=97.85%, sys=1.33%, ctx=68, majf=0, minf=51 00:26:30.040 IO depths : 1=5.3%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.2%, 32=0.0%, >=64=0.0% 00:26:30.040 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.040 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:26:30.040 issued rwts: total=1120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.040 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.040 filename2: (groupid=0, jobs=1): err= 0: pid=3939172: Wed Jul 24 22:35:53 2024 00:26:30.040 read: IOPS=111, BW=445KiB/s (456kB/s)(4536KiB/10185msec) 00:26:30.040 slat (usec): min=16, max=179, avg=97.04, stdev=16.28 00:26:30.040 clat (msec): min=33, max=484, avg=142.69, stdev=143.85 00:26:30.040 lat (msec): min=33, max=484, avg=142.78, stdev=143.86 00:26:30.040 clat percentiles (msec): 00:26:30.040 | 1.00th=[ 39], 5.00th=[ 39], 10.00th=[ 40], 20.00th=[ 40], 00:26:30.040 | 30.00th=[ 40], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 45], 00:26:30.040 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 359], 95.00th=[ 368], 00:26:30.040 | 99.00th=[ 405], 99.50th=[ 414], 99.90th=[ 485], 99.95th=[ 485], 00:26:30.040 | 99.99th=[ 485] 00:26:30.040 bw ( KiB/s): min= 128, max= 1536, per=4.10%, avg=447.20, stdev=545.74, samples=20 00:26:30.040 iops : min= 32, max= 384, avg=111.80, stdev=136.44, samples=20 00:26:30.041 lat (msec) : 50=63.49%, 100=2.82%, 250=1.41%, 500=32.28% 00:26:30.041 cpu : usr=98.34%, sys=1.06%, ctx=70, majf=0, minf=37 00:26:30.041 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:30.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.041 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.041 issued rwts: total=1134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.041 filename2: (groupid=0, jobs=1): err= 0: pid=3939173: Wed Jul 24 22:35:53 2024 00:26:30.041 read: IOPS=114, BW=459KiB/s (470kB/s)(4672KiB/10185msec) 00:26:30.041 slat (usec): min=9, max=136, avg=33.44, stdev=24.71 00:26:30.041 clat (msec): min=38, max=448, avg=139.23, stdev=134.91 00:26:30.041 lat (msec): min=38, max=448, avg=139.26, stdev=134.92 00:26:30.041 clat 
percentiles (msec): 00:26:30.041 | 1.00th=[ 40], 5.00th=[ 40], 10.00th=[ 40], 20.00th=[ 41], 00:26:30.041 | 30.00th=[ 41], 40.00th=[ 41], 50.00th=[ 42], 60.00th=[ 45], 00:26:30.041 | 70.00th=[ 262], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 363], 00:26:30.041 | 99.00th=[ 368], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:26:30.041 | 99.99th=[ 447] 00:26:30.041 bw ( KiB/s): min= 128, max= 1536, per=4.22%, avg=460.80, stdev=538.56, samples=20 00:26:30.041 iops : min= 32, max= 384, avg=115.20, stdev=134.64, samples=20 00:26:30.041 lat (msec) : 50=63.01%, 100=1.37%, 250=3.25%, 500=32.36% 00:26:30.041 cpu : usr=97.45%, sys=1.78%, ctx=58, majf=0, minf=46 00:26:30.041 IO depths : 1=5.4%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.1%, 32=0.0%, >=64=0.0% 00:26:30.041 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.041 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.041 issued rwts: total=1168,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.041 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:30.041 00:26:30.041 Run status group 0 (all jobs): 00:26:30.041 READ: bw=10.6MiB/s (11.2MB/s), 440KiB/s-514KiB/s (451kB/s-527kB/s), io=109MiB (114MB), run=10067-10203msec 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:30.041 22:35:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:30.041 22:35:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 bdev_null0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 [2024-07-24 22:35:54.266083] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 bdev_null1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.041 22:35:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.041 { 00:26:30.041 "params": { 00:26:30.041 "name": "Nvme$subsystem", 00:26:30.041 "trtype": "$TEST_TRANSPORT", 00:26:30.041 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:26:30.041 "adrfam": "ipv4", 00:26:30.041 "trsvcid": "$NVMF_PORT", 00:26:30.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:30.041 "hdgst": ${hdgst:-false}, 00:26:30.041 "ddgst": ${ddgst:-false} 00:26:30.041 }, 00:26:30.041 "method": "bdev_nvme_attach_controller" 00:26:30.041 } 00:26:30.041 EOF 00:26:30.041 )") 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:30.041 { 00:26:30.041 "params": { 00:26:30.041 "name": "Nvme$subsystem", 00:26:30.041 "trtype": "$TEST_TRANSPORT", 00:26:30.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:30.041 "adrfam": "ipv4", 00:26:30.041 "trsvcid": "$NVMF_PORT", 00:26:30.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:30.041 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:30.041 "hdgst": ${hdgst:-false}, 00:26:30.041 "ddgst": ${ddgst:-false} 00:26:30.041 }, 00:26:30.041 "method": "bdev_nvme_attach_controller" 00:26:30.041 } 00:26:30.041 EOF 00:26:30.041 )") 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:30.041 "params": { 00:26:30.041 "name": "Nvme0", 00:26:30.041 "trtype": "tcp", 00:26:30.041 "traddr": "10.0.0.2", 00:26:30.041 "adrfam": "ipv4", 00:26:30.041 "trsvcid": "4420", 00:26:30.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:30.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:30.041 "hdgst": false, 00:26:30.041 "ddgst": false 00:26:30.041 }, 00:26:30.041 "method": "bdev_nvme_attach_controller" 00:26:30.041 },{ 00:26:30.041 "params": { 00:26:30.041 "name": "Nvme1", 00:26:30.041 "trtype": "tcp", 00:26:30.041 "traddr": "10.0.0.2", 00:26:30.041 "adrfam": "ipv4", 00:26:30.041 "trsvcid": "4420", 00:26:30.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:30.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:30.041 "hdgst": false, 00:26:30.041 "ddgst": false 00:26:30.041 }, 00:26:30.041 "method": "bdev_nvme_attach_controller" 00:26:30.041 }' 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:30.041 22:35:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:30.041 22:35:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:30.041 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:30.041 ... 00:26:30.041 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:30.041 ... 
00:26:30.041 fio-3.35 00:26:30.041 Starting 4 threads 00:26:30.041 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.300 00:26:35.300 filename0: (groupid=0, jobs=1): err= 0: pid=3940221: Wed Jul 24 22:36:00 2024 00:26:35.300 read: IOPS=1703, BW=13.3MiB/s (14.0MB/s)(66.6MiB/5003msec) 00:26:35.300 slat (usec): min=7, max=177, avg=24.35, stdev=13.35 00:26:35.300 clat (usec): min=799, max=8911, avg=4603.51, stdev=506.76 00:26:35.300 lat (usec): min=819, max=8932, avg=4627.86, stdev=506.74 00:26:35.300 clat percentiles (usec): 00:26:35.300 | 1.00th=[ 3097], 5.00th=[ 3851], 10.00th=[ 4228], 20.00th=[ 4490], 00:26:35.300 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4621], 00:26:35.300 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5211], 00:26:35.300 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8029], 99.95th=[ 8094], 00:26:35.300 | 99.99th=[ 8848] 00:26:35.300 bw ( KiB/s): min=13296, max=14288, per=25.14%, avg=13632.00, stdev=268.90, samples=10 00:26:35.300 iops : min= 1662, max= 1786, avg=1704.00, stdev=33.61, samples=10 00:26:35.300 lat (usec) : 1000=0.04% 00:26:35.300 lat (msec) : 2=0.22%, 4=6.78%, 10=92.96% 00:26:35.300 cpu : usr=83.69%, sys=8.62%, ctx=128, majf=0, minf=0 00:26:35.300 IO depths : 1=1.2%, 2=19.5%, 4=54.1%, 8=25.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.300 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.300 issued rwts: total=8525,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.300 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.300 filename0: (groupid=0, jobs=1): err= 0: pid=3940222: Wed Jul 24 22:36:00 2024 00:26:35.300 read: IOPS=1676, BW=13.1MiB/s (13.7MB/s)(65.5MiB/5002msec) 00:26:35.300 slat (nsec): min=7218, max=83902, avg=23165.96, stdev=13830.79 00:26:35.301 clat (usec): min=1040, max=8267, avg=4686.69, stdev=555.23 00:26:35.301 lat (usec): min=1051, max=8292, 
avg=4709.86, stdev=554.03 00:26:35.301 clat percentiles (usec): 00:26:35.301 | 1.00th=[ 3261], 5.00th=[ 4146], 10.00th=[ 4359], 20.00th=[ 4490], 00:26:35.301 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:26:35.301 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 5538], 00:26:35.301 | 99.00th=[ 7177], 99.50th=[ 7570], 99.90th=[ 8029], 99.95th=[ 8160], 00:26:35.301 | 99.99th=[ 8291] 00:26:35.301 bw ( KiB/s): min=13008, max=13568, per=24.74%, avg=13416.89, stdev=177.83, samples=9 00:26:35.301 iops : min= 1626, max= 1696, avg=1677.11, stdev=22.23, samples=9 00:26:35.301 lat (msec) : 2=0.29%, 4=3.55%, 10=96.16% 00:26:35.301 cpu : usr=95.14%, sys=4.30%, ctx=9, majf=0, minf=9 00:26:35.301 IO depths : 1=0.1%, 2=17.7%, 4=55.3%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 issued rwts: total=8385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.301 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.301 filename1: (groupid=0, jobs=1): err= 0: pid=3940223: Wed Jul 24 22:36:00 2024 00:26:35.301 read: IOPS=1693, BW=13.2MiB/s (13.9MB/s)(66.2MiB/5002msec) 00:26:35.301 slat (nsec): min=7429, max=82475, avg=15891.23, stdev=11489.34 00:26:35.301 clat (usec): min=1186, max=8526, avg=4675.99, stdev=512.12 00:26:35.301 lat (usec): min=1198, max=8540, avg=4691.88, stdev=512.29 00:26:35.301 clat percentiles (usec): 00:26:35.301 | 1.00th=[ 3425], 5.00th=[ 3916], 10.00th=[ 4228], 20.00th=[ 4555], 00:26:35.301 | 30.00th=[ 4621], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:26:35.301 | 70.00th=[ 4752], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5211], 00:26:35.301 | 99.00th=[ 7046], 99.50th=[ 7701], 99.90th=[ 8225], 99.95th=[ 8455], 00:26:35.301 | 99.99th=[ 8586] 00:26:35.301 bw ( KiB/s): min=13152, max=14080, per=24.96%, avg=13534.22, stdev=271.35, 
samples=9 00:26:35.301 iops : min= 1644, max= 1760, avg=1691.78, stdev=33.92, samples=9 00:26:35.301 lat (msec) : 2=0.09%, 4=5.82%, 10=94.09% 00:26:35.301 cpu : usr=94.84%, sys=4.64%, ctx=12, majf=0, minf=0 00:26:35.301 IO depths : 1=0.1%, 2=7.6%, 4=65.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 issued rwts: total=8473,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.301 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.301 filename1: (groupid=0, jobs=1): err= 0: pid=3940224: Wed Jul 24 22:36:00 2024 00:26:35.301 read: IOPS=1704, BW=13.3MiB/s (14.0MB/s)(66.6MiB/5003msec) 00:26:35.301 slat (nsec): min=7665, max=82827, avg=22016.60, stdev=14064.43 00:26:35.301 clat (usec): min=1278, max=8609, avg=4618.15, stdev=511.44 00:26:35.301 lat (usec): min=1289, max=8642, avg=4640.17, stdev=511.41 00:26:35.301 clat percentiles (usec): 00:26:35.301 | 1.00th=[ 2999], 5.00th=[ 3884], 10.00th=[ 4228], 20.00th=[ 4490], 00:26:35.301 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:26:35.301 | 70.00th=[ 4686], 80.00th=[ 4752], 90.00th=[ 4883], 95.00th=[ 5211], 00:26:35.301 | 99.00th=[ 6587], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8160], 00:26:35.301 | 99.99th=[ 8586] 00:26:35.301 bw ( KiB/s): min=13386, max=14080, per=25.14%, avg=13631.40, stdev=235.23, samples=10 00:26:35.301 iops : min= 1673, max= 1760, avg=1703.90, stdev=29.43, samples=10 00:26:35.301 lat (msec) : 2=0.32%, 4=6.49%, 10=93.20% 00:26:35.301 cpu : usr=95.14%, sys=4.28%, ctx=9, majf=0, minf=0 00:26:35.301 IO depths : 1=0.3%, 2=18.3%, 4=54.5%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.301 issued rwts: total=8526,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:35.301 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.301 00:26:35.301 Run status group 0 (all jobs): 00:26:35.301 READ: bw=53.0MiB/s (55.5MB/s), 13.1MiB/s-13.3MiB/s (13.7MB/s-14.0MB/s), io=265MiB (278MB), run=5002-5003msec 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 00:26:35.301 real 0m24.227s 00:26:35.301 user 4m35.157s 00:26:35.301 sys 0m6.291s 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 ************************************ 00:26:35.301 END TEST fio_dif_rand_params 00:26:35.301 ************************************ 00:26:35.301 22:36:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:35.301 22:36:00 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:35.301 22:36:00 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 ************************************ 00:26:35.301 START TEST fio_dif_digest 00:26:35.301 ************************************ 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:35.301 22:36:00 
nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 bdev_null0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.301 [2024-07-24 22:36:00.695775] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:35.301 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.302 { 00:26:35.302 "params": { 00:26:35.302 "name": "Nvme$subsystem", 00:26:35.302 "trtype": "$TEST_TRANSPORT", 00:26:35.302 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.302 "adrfam": "ipv4", 00:26:35.302 "trsvcid": "$NVMF_PORT", 00:26:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.302 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:26:35.302 "hdgst": ${hdgst:-false}, 00:26:35.302 "ddgst": ${ddgst:-false} 00:26:35.302 }, 00:26:35.302 "method": "bdev_nvme_attach_controller" 00:26:35.302 } 00:26:35.302 EOF 00:26:35.302 )") 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- 
# ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:35.302 "params": { 00:26:35.302 "name": "Nvme0", 00:26:35.302 "trtype": "tcp", 00:26:35.302 "traddr": "10.0.0.2", 00:26:35.302 "adrfam": "ipv4", 00:26:35.302 "trsvcid": "4420", 00:26:35.302 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.302 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.302 "hdgst": true, 00:26:35.302 "ddgst": true 00:26:35.302 }, 00:26:35.302 "method": "bdev_nvme_attach_controller" 00:26:35.302 }' 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 
00:26:35.302 22:36:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.302 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:35.302 ... 00:26:35.302 fio-3.35 00:26:35.302 Starting 3 threads 00:26:35.302 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.508 00:26:47.508 filename0: (groupid=0, jobs=1): err= 0: pid=3940883: Wed Jul 24 22:36:11 2024 00:26:47.508 read: IOPS=167, BW=20.9MiB/s (21.9MB/s)(210MiB/10045msec) 00:26:47.508 slat (nsec): min=5430, max=38598, avg=15723.08, stdev=3122.80 00:26:47.508 clat (usec): min=13701, max=56773, avg=17913.25, stdev=1655.62 00:26:47.508 lat (usec): min=13714, max=56791, avg=17928.97, stdev=1655.75 00:26:47.508 clat percentiles (usec): 00:26:47.508 | 1.00th=[15533], 5.00th=[16319], 10.00th=[16581], 20.00th=[16909], 00:26:47.508 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17695], 60.00th=[17957], 00:26:47.508 | 70.00th=[18220], 80.00th=[18744], 90.00th=[19268], 95.00th=[19792], 00:26:47.508 | 99.00th=[20841], 99.50th=[21365], 99.90th=[52167], 99.95th=[56886], 00:26:47.508 | 99.99th=[56886] 00:26:47.508 bw ( KiB/s): min=20777, max=22016, per=31.91%, avg=21454.85, stdev=408.28, samples=20 00:26:47.508 iops : min= 162, max= 172, avg=167.60, stdev= 3.22, samples=20 00:26:47.508 lat (msec) : 20=96.84%, 50=3.04%, 100=0.12% 00:26:47.508 cpu : usr=94.07%, sys=5.51%, ctx=32, majf=0, minf=79 00:26:47.508 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 issued rwts: total=1678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.508 filename0: (groupid=0, jobs=1): err= 0: pid=3940884: Wed 
Jul 24 22:36:11 2024 00:26:47.508 read: IOPS=182, BW=22.8MiB/s (24.0MB/s)(230MiB/10045msec) 00:26:47.508 slat (nsec): min=5250, max=35801, avg=15804.24, stdev=3269.25 00:26:47.508 clat (usec): min=13070, max=55460, avg=16370.27, stdev=1593.46 00:26:47.508 lat (usec): min=13083, max=55481, avg=16386.08, stdev=1593.59 00:26:47.508 clat percentiles (usec): 00:26:47.508 | 1.00th=[14091], 5.00th=[14615], 10.00th=[15008], 20.00th=[15533], 00:26:47.508 | 30.00th=[15795], 40.00th=[16057], 50.00th=[16319], 60.00th=[16581], 00:26:47.508 | 70.00th=[16909], 80.00th=[17171], 90.00th=[17433], 95.00th=[17957], 00:26:47.508 | 99.00th=[19268], 99.50th=[19792], 99.90th=[51119], 99.95th=[55313], 00:26:47.508 | 99.99th=[55313] 00:26:47.508 bw ( KiB/s): min=22784, max=24064, per=34.92%, avg=23475.20, stdev=407.74, samples=20 00:26:47.508 iops : min= 178, max= 188, avg=183.40, stdev= 3.19, samples=20 00:26:47.508 lat (msec) : 20=99.51%, 50=0.38%, 100=0.11% 00:26:47.508 cpu : usr=93.38%, sys=6.20%, ctx=27, majf=0, minf=223 00:26:47.508 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 issued rwts: total=1836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.508 filename0: (groupid=0, jobs=1): err= 0: pid=3940885: Wed Jul 24 22:36:11 2024 00:26:47.508 read: IOPS=175, BW=21.9MiB/s (23.0MB/s)(220MiB/10044msec) 00:26:47.508 slat (usec): min=5, max=144, avg=25.50, stdev= 8.13 00:26:47.508 clat (usec): min=13477, max=54989, avg=17046.53, stdev=1727.23 00:26:47.508 lat (usec): min=13501, max=55020, avg=17072.03, stdev=1726.91 00:26:47.508 clat percentiles (usec): 00:26:47.508 | 1.00th=[14353], 5.00th=[15139], 10.00th=[15533], 20.00th=[16057], 00:26:47.508 | 30.00th=[16319], 40.00th=[16712], 50.00th=[16909], 60.00th=[17171], 
00:26:47.508 | 70.00th=[17433], 80.00th=[17957], 90.00th=[18482], 95.00th=[19006], 00:26:47.508 | 99.00th=[20317], 99.50th=[21103], 99.90th=[51643], 99.95th=[54789], 00:26:47.508 | 99.99th=[54789] 00:26:47.508 bw ( KiB/s): min=21504, max=23552, per=33.51%, avg=22530.20, stdev=580.46, samples=20 00:26:47.508 iops : min= 168, max= 184, avg=176.00, stdev= 4.54, samples=20 00:26:47.508 lat (msec) : 20=98.75%, 50=1.14%, 100=0.11% 00:26:47.508 cpu : usr=79.77%, sys=11.26%, ctx=651, majf=0, minf=104 00:26:47.508 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.508 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.508 00:26:47.508 Run status group 0 (all jobs): 00:26:47.508 READ: bw=65.7MiB/s (68.8MB/s), 20.9MiB/s-22.8MiB/s (21.9MB/s-24.0MB/s), io=660MiB (692MB), run=10044-10045msec 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:47.508 00:26:47.508 real 0m11.209s 00:26:47.508 user 0m27.805s 00:26:47.508 sys 0m2.546s 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.508 22:36:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.508 ************************************ 00:26:47.508 END TEST fio_dif_digest 00:26:47.508 ************************************ 00:26:47.508 22:36:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:47.508 22:36:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.508 rmmod nvme_tcp 00:26:47.508 rmmod nvme_fabrics 00:26:47.508 rmmod nvme_keyring 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3936158 ']' 00:26:47.508 22:36:11 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3936158 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3936158 ']' 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3936158 00:26:47.508 22:36:11 
nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3936158 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3936158' 00:26:47.508 killing process with pid 3936158 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3936158 00:26:47.508 22:36:11 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3936158 00:26:47.508 22:36:12 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:47.508 22:36:12 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:47.508 Waiting for block devices as requested 00:26:47.508 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:47.768 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:47.768 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:47.768 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:47.768 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:48.028 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:48.028 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:48.028 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:48.028 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:26:48.287 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:26:48.287 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:26:48.287 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:26:48.287 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:26:48.546 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:26:48.546 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:26:48.546 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:26:48.805 0000:80:04.0 (8086 3c20): vfio-pci 
-> ioatdma 00:26:48.805 22:36:14 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.805 22:36:14 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.805 22:36:14 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.805 22:36:14 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.805 22:36:14 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.805 22:36:14 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:48.805 22:36:14 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.708 22:36:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.708 00:26:50.708 real 1m5.492s 00:26:50.708 user 6m29.162s 00:26:50.708 sys 0m17.377s 00:26:50.708 22:36:16 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:50.708 22:36:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:50.708 ************************************ 00:26:50.708 END TEST nvmf_dif 00:26:50.708 ************************************ 00:26:50.708 22:36:16 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:50.708 22:36:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:50.708 22:36:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:50.708 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:26:50.966 ************************************ 00:26:50.966 START TEST nvmf_abort_qd_sizes 00:26:50.966 ************************************ 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:50.966 * Looking for test storage... 
00:26:50.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.966 22:36:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:50.967 22:36:16 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.967 22:36:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.871 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.0 (0x8086 - 0x159b)' 00:26:52.872 Found 0000:08:00.0 (0x8086 - 0x159b) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:08:00.1 (0x8086 - 0x159b)' 00:26:52.872 Found 0000:08:00.1 (0x8086 - 0x159b) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.0: cvl_0_0' 
00:26:52.872 Found net devices under 0000:08:00.0: cvl_0_0 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:08:00.1: cvl_0_1' 00:26:52.872 Found net devices under 0000:08:00.1: cvl_0_1 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:52.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:26:52.872 00:26:52.872 --- 10.0.0.2 ping statistics --- 00:26:52.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.872 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:26:52.872 00:26:52.872 --- 10.0.0.1 ping statistics --- 00:26:52.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.872 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:52.872 22:36:18 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:53.812 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:26:53.812 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:26:53.812 0000:80:04.1 (8086 3c21): 
ioatdma -> vfio-pci 00:26:53.812 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:26:54.745 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3945149 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3945149 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3945149 ']' 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:54.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:54.745 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.745 [2024-07-24 22:36:20.409477] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:26:54.745 [2024-07-24 22:36:20.409586] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:54.745 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.003 [2024-07-24 22:36:20.489942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.003 [2024-07-24 22:36:20.648834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.003 [2024-07-24 22:36:20.648922] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.003 [2024-07-24 22:36:20.648953] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.003 [2024-07-24 22:36:20.648981] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.003 [2024-07-24 22:36:20.649004] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
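The `nvmf_tcp_init` trace earlier in this run moves one PF netdev (`cvl_0_0`) into a private namespace as the target at 10.0.0.2, leaves the other (`cvl_0_1`) in the root namespace as the initiator at 10.0.0.1, and opens TCP/4420. A minimal standalone sketch of that topology follows; this is not the SPDK helper itself, the interface names are placeholders from this log, and real execution needs root, so `DRY_RUN=1` prints the plan instead of running it:

```shell
# Sketch of the namespace topology built by nvmf_tcp_init (assumed
# layout reconstructed from the trace above, not the actual helper).
setup_tcp_ns() {
    target_if=$1; initiator_if=$2; ns=${3:-nvmf_ns_spdk}
    run="sh -c"
    [ -n "$DRY_RUN" ] && run="echo"   # DRY_RUN=1: show commands only
    $run "ip netns add $ns"
    $run "ip link set $target_if netns $ns"            # target NIC into the namespace
    $run "ip addr add 10.0.0.1/24 dev $initiator_if"   # initiator side
    $run "ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if"
    $run "ip link set $initiator_if up"
    $run "ip netns exec $ns ip link set $target_if up"
    $run "ip netns exec $ns ip link set lo up"
    $run "iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"
}

DRY_RUN=1 setup_tcp_ns cvl_0_0 cvl_0_1
```

With `DRY_RUN` unset the same calls execute via `sh -c`, which is why the trace pings 10.0.0.2 from the root namespace and 10.0.0.1 from inside the namespace immediately afterwards.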
00:26:55.003 [2024-07-24 22:36:20.649100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.003 [2024-07-24 22:36:20.649158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.003 [2024-07-24 22:36:20.649216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.003 [2024-07-24 22:36:20.649226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:84:00.0 ]] 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:84:00.0 ]] 
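The `nvme_in_userspace` scan traced here walks PCI devices whose class is 0x010802 (NVMe) and checks driver binding under `/sys/bus/pci/drivers/nvme`. A simplified sketch of the class scan is below; the real `scripts/common.sh` additionally caches the PCI bus and applies platform and driver checks this version omits, and `SYSROOT` is a hypothetical parameter added so the function can be exercised against a fake sysfs tree:

```shell
# Print the BDFs of NVMe-class PCI devices found under a sysfs root.
# 0x010802 = mass storage / non-volatile memory / NVMe interface.
list_nvme_bdfs() {
    sysroot=${1:-/sys}
    for dev in "$sysroot"/bus/pci/devices/*; do
        [ -r "$dev/class" ] || continue
        read -r class < "$dev/class"
        case $class in
            0x010802*) echo "${dev##*/}" ;;   # e.g. 0000:84:00.0
        esac
    done
}

# list_nvme_bdfs          # scan the real /sys on a Linux host
```

On this test node the scan yields the single device 0000:84:00.0, which the script then hands to `spdk_target_abort` as `$nvme`.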
00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:84:00.0 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:84:00.0 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.261 22:36:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.261 ************************************ 00:26:55.261 START TEST spdk_target_abort 00:26:55.261 ************************************ 00:26:55.261 22:36:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:26:55.261 22:36:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:55.261 22:36:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:84:00.0 -b spdk_target 00:26:55.261 22:36:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.261 22:36:20 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.544 spdk_targetn1 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.544 [2024-07-24 22:36:23.674664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.544 [2024-07-24 22:36:23.706915] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:58.544 22:36:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.544 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.907 Initializing NVMe Controllers 00:27:01.907 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:01.907 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:01.907 Initialization complete. Launching workers. 
00:27:01.907 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9607, failed: 0 00:27:01.907 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1212, failed to submit 8395 00:27:01.907 success 687, unsuccess 525, failed 0 00:27:01.907 22:36:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.907 22:36:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.907 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.186 Initializing NVMe Controllers 00:27:05.186 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.186 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:05.186 Initialization complete. Launching workers. 
00:27:05.186 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8490, failed: 0 00:27:05.186 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 7282 00:27:05.186 success 360, unsuccess 848, failed 0 00:27:05.186 22:36:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.186 22:36:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.186 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.465 Initializing NVMe Controllers 00:27:08.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:08.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:08.465 Initialization complete. Launching workers. 
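Each abort run above reports counters of the form "abort submitted N ... success s, unsuccess u, failed f", where the three outcomes should sum to the submitted total (e.g. 687 + 525 + 0 = 1212 for the qd=4 run). A small helper, written for this note rather than taken from the test suite, checks that invariant and turns the counters into a percentage:

```shell
# Sanity-check abort counters from the log and compute a success rate.
# Invariant: submitted == success + unsuccess + failed.
abort_summary() {
    submitted=$1; success=$2; unsuccess=$3; failed=${4:-0}
    if [ "$submitted" -ne "$((success + unsuccess + failed))" ]; then
        echo "inconsistent abort counters" >&2
        return 1
    fi
    echo "$((success * 100 / submitted))% of submitted aborts succeeded"
}

abort_summary 1212 687 525   # qd=4 run above -> "56% of submitted aborts succeeded"
```

Applied to the runs in this log, the success rate drops as the queue depth grows (56% at qd=4, 29% at qd=24, 15% at qd=64), which is the behavior `abort_qd_sizes` is exercising: deeper queues mean more I/O has already completed by the time the abort arrives.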
00:27:08.465 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29892, failed: 0 00:27:08.465 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2815, failed to submit 27077 00:27:08.465 success 423, unsuccess 2392, failed 0 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.465 22:36:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3945149 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3945149 ']' 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3945149 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945149 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945149' 00:27:09.399 killing process with pid 3945149 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3945149 00:27:09.399 22:36:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3945149 00:27:09.399 00:27:09.399 real 0m14.197s 00:27:09.399 user 0m53.904s 00:27:09.399 sys 0m2.473s 00:27:09.399 22:36:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:09.399 22:36:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.399 ************************************ 00:27:09.399 END TEST spdk_target_abort 00:27:09.399 ************************************ 00:27:09.399 22:36:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:09.399 22:36:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:09.399 22:36:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:09.399 22:36:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:09.659 ************************************ 00:27:09.659 START TEST kernel_target_abort 00:27:09.659 ************************************ 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:09.659 22:36:35 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:09.659 22:36:35 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:10.596 Waiting for block devices as requested 00:27:10.596 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:27:10.596 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:10.596 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:10.596 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:10.856 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:10.856 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:10.856 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:10.856 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:11.114 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:11.114 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:11.114 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:11.114 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:11.373 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:11.373 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:11.373 0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:11.631 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:11.631 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local 
device=nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:11.631 No valid GPT data, bailing 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:11.631 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc --hostid=a27f578f-8275-e111-bd1d-001e673e77fc -a 10.0.0.1 -t tcp -s 4420 00:27:11.889 00:27:11.889 Discovery Log Number of Records 2, Generation counter 2 00:27:11.889 =====Discovery Log Entry 0====== 00:27:11.889 trtype: tcp 00:27:11.889 adrfam: ipv4 00:27:11.889 subtype: current discovery subsystem 00:27:11.889 treq: not specified, sq flow control disable supported 00:27:11.889 portid: 1 00:27:11.889 trsvcid: 4420 00:27:11.889 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:11.889 traddr: 10.0.0.1 00:27:11.889 eflags: none 00:27:11.889 sectype: none 00:27:11.889 =====Discovery Log Entry 1====== 00:27:11.889 trtype: tcp 00:27:11.889 adrfam: ipv4 00:27:11.889 subtype: nvme subsystem 00:27:11.889 treq: not specified, sq flow control disable supported 00:27:11.889 portid: 1 00:27:11.889 trsvcid: 4420 00:27:11.889 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:11.889 traddr: 10.0.0.1 00:27:11.889 eflags: none 00:27:11.889 sectype: none 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:11.889 22:36:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:11.889 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.167 Initializing NVMe Controllers 00:27:15.167 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:15.167 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:15.167 Initialization complete. Launching workers. 
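Editorial note: the configure_kernel_target steps traced earlier (nvmf/common.sh@658 through @677) amount to a configfs sequence against the kernel nvmet target. The sketch below uses the NQN, device, IP, and port values visible in the trace; the mapping of each echoed value to a specific configfs attribute file (attr_model, attr_allow_any_host, device_path, enable, addr_*) is my reading of the kernel's nvmet configfs layout, since the trace shows only the values. The write steps need root plus the nvmet/nvmet_tcp modules, so they are guarded.

```shell
#!/bin/sh
# Hedged sketch of the configfs target setup from the trace above.
# Values come from the log; attribute file names are assumptions based
# on the kernel nvmet configfs layout.
kernel_name=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/$kernel_name
ns=$subsys/namespaces/1
port=$nvmet/ports/1

if [ -d "$nvmet" ] && [ -w "$nvmet" ]; then
    mkdir "$subsys" "$ns" "$port"
    echo "SPDK-$kernel_name" > "$subsys/attr_model"   # model string (@665)
    echo 1 > "$subsys/attr_allow_any_host"            # no host ACL (@667)
    echo /dev/nvme0n1 > "$ns/device_path"             # backing block device (@668)
    echo 1 > "$ns/enable"                             # bring namespace online (@669)
    echo 10.0.0.1 > "$port/addr_traddr"               # listen address (@671)
    echo tcp > "$port/addr_trtype"                    # transport (@672)
    echo 4420 > "$port/addr_trsvcid"                  # service id / port (@673)
    echo ipv4 > "$port/addr_adrfam"                   # address family (@674)
    ln -s "$subsys" "$port/subsystems/"               # expose subsystem on port (@677)
fi
```

The `nvme discover` output that follows in the trace (two log entries, discovery subsystem plus nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420) is what this sequence produces when it succeeds.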
00:27:15.167 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39340, failed: 0 00:27:15.167 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 39340, failed to submit 0 00:27:15.167 success 0, unsuccess 39340, failed 0 00:27:15.167 22:36:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:15.167 22:36:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:15.167 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.635 Initializing NVMe Controllers 00:27:18.635 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:18.635 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:18.635 Initialization complete. Launching workers. 
00:27:18.635 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 71998, failed: 0 00:27:18.635 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18130, failed to submit 53868 00:27:18.635 success 0, unsuccess 18130, failed 0 00:27:18.635 22:36:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:18.635 22:36:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:18.635 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.159 Initializing NVMe Controllers 00:27:21.159 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:21.159 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:21.159 Initialization complete. Launching workers. 
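Editorial note: the rabort helper traced above (target/abort_qd_sizes.sh@26 through @34) builds one transport string from trtype/adrfam/traddr/trsvcid/subnqn and runs SPDK's abort example once per queue depth in qds=(4 24 64). A hedged sketch of that loop; the binary path and `-r` string are taken verbatim from the log, SPDK_DIR is parameterized here as an assumption, and the invocations are echoed rather than executed since the example binary only exists in the CI workspace.

```shell
#!/bin/sh
# Sketch of the rabort queue-depth loop from the trace above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
    # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O;
    # -q: queue depth whose effect on abort submission is under test
    echo "$SPDK_DIR/build/examples/abort -q $qd -w rw -M 50 -o 4096 -r '$target'"
done
```

This matches the three runs in the trace: at qd=4 every abort is submitted (39340 submitted, 0 failed to submit), while at qd=24 and qd=64 most aborts fail to submit (53868 and 52566 respectively), which is the queue-depth behavior the test exercises.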
00:27:21.159 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 70084, failed: 0 00:27:21.159 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17518, failed to submit 52566 00:27:21.159 success 0, unsuccess 17518, failed 0 00:27:21.159 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:21.159 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:21.159 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:21.159 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:21.160 22:36:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:22.534 0000:00:04.7 (8086 3c27): ioatdma -> vfio-pci 00:27:22.534 0000:00:04.6 (8086 3c26): ioatdma -> vfio-pci 00:27:22.534 0000:00:04.5 (8086 3c25): ioatdma -> vfio-pci 00:27:22.535 0000:00:04.4 (8086 3c24): ioatdma -> vfio-pci 00:27:22.535 0000:00:04.3 (8086 3c23): ioatdma -> vfio-pci 00:27:22.535 
0000:00:04.2 (8086 3c22): ioatdma -> vfio-pci 00:27:22.535 0000:00:04.1 (8086 3c21): ioatdma -> vfio-pci 00:27:22.535 0000:00:04.0 (8086 3c20): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.7 (8086 3c27): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.6 (8086 3c26): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.5 (8086 3c25): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.4 (8086 3c24): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.3 (8086 3c23): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.2 (8086 3c22): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.1 (8086 3c21): ioatdma -> vfio-pci 00:27:22.535 0000:80:04.0 (8086 3c20): ioatdma -> vfio-pci 00:27:23.472 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:27:23.472 00:27:23.472 real 0m13.912s 00:27:23.472 user 0m6.129s 00:27:23.472 sys 0m3.083s 00:27:23.472 22:36:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:23.472 22:36:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.472 ************************************ 00:27:23.472 END TEST kernel_target_abort 00:27:23.472 ************************************ 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.472 rmmod nvme_tcp 00:27:23.472 rmmod nvme_fabrics 00:27:23.472 rmmod nvme_keyring 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3945149 ']' 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3945149 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3945149 ']' 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3945149 00:27:23.472 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3945149) - No such process 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3945149 is not found' 00:27:23.472 Process with pid 3945149 is not found 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:23.472 22:36:49 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.406 Waiting for block devices as requested 00:27:24.406 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:27:24.664 0000:00:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:24.664 0000:00:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:24.664 0000:00:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:24.664 0000:00:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:24.923 0000:00:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:24.923 0000:00:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:24.923 0000:00:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:24.923 0000:00:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:25.183 0000:80:04.7 (8086 3c27): vfio-pci -> ioatdma 00:27:25.183 0000:80:04.6 (8086 3c26): vfio-pci -> ioatdma 00:27:25.183 0000:80:04.5 (8086 3c25): vfio-pci -> ioatdma 00:27:25.443 0000:80:04.4 (8086 3c24): vfio-pci -> ioatdma 00:27:25.443 0000:80:04.3 (8086 3c23): vfio-pci -> ioatdma 00:27:25.443 
0000:80:04.2 (8086 3c22): vfio-pci -> ioatdma 00:27:25.443 0000:80:04.1 (8086 3c21): vfio-pci -> ioatdma 00:27:25.703 0000:80:04.0 (8086 3c20): vfio-pci -> ioatdma 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:25.703 22:36:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.610 22:36:53 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.610 00:27:27.610 real 0m36.880s 00:27:27.610 user 1m2.124s 00:27:27.610 sys 0m8.503s 00:27:27.610 22:36:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.610 22:36:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:27.610 ************************************ 00:27:27.610 END TEST nvmf_abort_qd_sizes 00:27:27.610 ************************************ 00:27:27.872 22:36:53 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:27.872 22:36:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:27.872 22:36:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.872 22:36:53 -- common/autotest_common.sh@10 -- # set +x 00:27:27.872 ************************************ 00:27:27.872 START TEST keyring_file 00:27:27.872 ************************************ 00:27:27.872 22:36:53 keyring_file -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:27.872 * Looking for test storage... 00:27:27.872 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:27.872 22:36:53 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:27.872 22:36:53 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.872 22:36:53 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.873 22:36:53 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.873 22:36:53 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.873 22:36:53 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.873 22:36:53 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.873 22:36:53 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.873 22:36:53 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.873 22:36:53 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:27.873 22:36:53 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:27.873 22:36:53 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.F1NvYJTLFR 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.F1NvYJTLFR 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.F1NvYJTLFR 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.F1NvYJTLFR 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.e2K00eNlbB 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:27.873 22:36:53 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:27.873 22:36:53 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.e2K00eNlbB 00:27:27.873 22:36:53 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.e2K00eNlbB 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.e2K00eNlbB 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@30 -- # tgtpid=3949676 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:27.873 22:36:53 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3949676 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3949676 ']' 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.873 22:36:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.132 [2024-07-24 22:36:53.585112] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
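Editorial note: the prep_key/format_interchange_psk steps traced above (keyring/common.sh@18 through @23 and nvmf/common.sh@702 through @705) write a TLS PSK into a temp file, and the trace shows the formatting is delegated to `python -`. As I read the NVMe/TCP TLS PSK interchange convention, the file contents are `NVMeTLSkey-1:<digest>:base64(key || CRC32(key)):`, with digest 00 meaning the configured (unhashed) PSK; that field mapping is my assumption, hedged below, not something the trace states. The sketch mirrors the trace's pipe-into-python mechanism:

```shell
#!/bin/sh
# Hedged sketch of format_interchange_psk from the trace above.
# Format assumption: 'NVMeTLSkey-1:<digest hex>:base64(key || CRC32(key)):'
# with the CRC32 appended little-endian; verify against the spec before use.
key=00112233445566778899aabbccddeeff   # key0 from the trace
digest=0
psk=$(python3 - "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = struct.pack("<I", zlib.crc32(key))  # little-endian CRC32 trailer
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02x}:{base64.b64encode(key + crc).decode()}:")
EOF
)
echo "$psk"
```

The result is what keyring/common.sh then writes to the mktemp path (e.g. `/tmp/tmp.F1NvYJTLFR` above) and chmods to 0600 before registering it with keyring_file_add_key.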
00:27:28.132 [2024-07-24 22:36:53.585211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949676 ] 00:27:28.132 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.132 [2024-07-24 22:36:53.646089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.132 [2024-07-24 22:36:53.766704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.389 22:36:53 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.389 22:36:53 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:28.389 22:36:53 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:28.389 22:36:53 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 22:36:53 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 [2024-07-24 22:36:53.999811] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.389 null0 00:27:28.389 [2024-07-24 22:36:54.031869] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:28.389 [2024-07-24 22:36:54.032277] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:28.389 [2024-07-24 22:36:54.039877] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.389 22:36:54 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.389 [2024-07-24 22:36:54.051894] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:28.389 request: 00:27:28.389 { 00:27:28.389 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:28.389 "secure_channel": false, 00:27:28.389 "listen_address": { 00:27:28.389 "trtype": "tcp", 00:27:28.389 "traddr": "127.0.0.1", 00:27:28.389 "trsvcid": "4420" 00:27:28.389 }, 00:27:28.389 "method": "nvmf_subsystem_add_listener", 00:27:28.389 "req_id": 1 00:27:28.389 } 00:27:28.389 Got JSON-RPC error response 00:27:28.389 response: 00:27:28.389 { 00:27:28.389 "code": -32602, 00:27:28.389 "message": "Invalid parameters" 00:27:28.389 } 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:28.389 22:36:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:28.390 22:36:54 keyring_file -- keyring/file.sh@46 -- # bperfpid=3949755 00:27:28.390 22:36:54 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3949755 
/var/tmp/bperf.sock 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3949755 ']' 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.390 22:36:54 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.390 22:36:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.647 [2024-07-24 22:36:54.105895] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:27:28.647 [2024-07-24 22:36:54.105986] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3949755 ] 00:27:28.647 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.647 [2024-07-24 22:36:54.167641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.647 [2024-07-24 22:36:54.284301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.904 22:36:54 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.904 22:36:54 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:28.904 22:36:54 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:28.904 22:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:29.161 22:36:54 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e2K00eNlbB 00:27:29.161 22:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e2K00eNlbB 00:27:29.419 22:36:54 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:29.419 22:36:54 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:29.419 22:36:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.419 22:36:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.419 22:36:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:29.677 22:36:55 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.F1NvYJTLFR == 
\/\t\m\p\/\t\m\p\.\F\1\N\v\Y\J\T\L\F\R ]] 00:27:29.677 22:36:55 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:29.677 22:36:55 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:29.677 22:36:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.677 22:36:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.677 22:36:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:29.936 22:36:55 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.e2K00eNlbB == \/\t\m\p\/\t\m\p\.\e\2\K\0\0\e\N\l\b\B ]] 00:27:29.936 22:36:55 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:29.936 22:36:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:29.936 22:36:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:29.936 22:36:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.936 22:36:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.936 22:36:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.195 22:36:55 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:30.195 22:36:55 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:30.195 22:36:55 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.195 22:36:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.195 22:36:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.195 22:36:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.195 22:36:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.452 22:36:55 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:27:30.452 22:36:55 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:30.452 22:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:30.709 [2024-07-24 22:36:56.219431] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:30.709 nvme0n1 00:27:30.709 22:36:56 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:30.709 22:36:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.709 22:36:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.709 22:36:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.709 22:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.709 22:36:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.966 22:36:56 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:30.966 22:36:56 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:30.966 22:36:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.966 22:36:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.966 22:36:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.966 22:36:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.966 22:36:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.224 22:36:56 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:31.224 22:36:56 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.224 Running I/O for 1 seconds... 00:27:32.596 00:27:32.596 Latency(us) 00:27:32.596 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.596 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:32.596 nvme0n1 : 1.01 7382.08 28.84 0.00 0.00 17223.64 6165.24 25631.86 00:27:32.596 =================================================================================================================== 00:27:32.596 Total : 7382.08 28.84 0.00 0.00 17223.64 6165.24 25631.86 00:27:32.596 0 00:27:32.596 22:36:57 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:32.596 22:36:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:32.596 22:36:58 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:32.596 22:36:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:32.596 22:36:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.596 22:36:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.596 22:36:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.596 22:36:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:32.854 22:36:58 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:32.854 22:36:58 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:32.854 22:36:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:32.854 22:36:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:32.854 22:36:58 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:32.854 22:36:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:32.854 22:36:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.112 22:36:58 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:33.112 22:36:58 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:33.112 22:36:58 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.112 22:36:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.370 [2024-07-24 22:36:59.063403] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:33.370 [2024-07-24 22:36:59.064192] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb520 (107): Transport endpoint is not connected 00:27:33.370 [2024-07-24 22:36:59.065183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbeb520 (9): Bad file descriptor 00:27:33.370 [2024-07-24 22:36:59.066182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:33.370 [2024-07-24 22:36:59.066203] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:33.370 [2024-07-24 22:36:59.066218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:33.370 request: 00:27:33.370 { 00:27:33.370 "name": "nvme0", 00:27:33.370 "trtype": "tcp", 00:27:33.370 "traddr": "127.0.0.1", 00:27:33.370 "adrfam": "ipv4", 00:27:33.370 "trsvcid": "4420", 00:27:33.370 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.370 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.370 "prchk_reftag": false, 00:27:33.370 "prchk_guard": false, 00:27:33.370 "hdgst": false, 00:27:33.370 "ddgst": false, 00:27:33.370 "psk": "key1", 00:27:33.370 "method": "bdev_nvme_attach_controller", 00:27:33.370 "req_id": 1 00:27:33.370 } 00:27:33.370 Got JSON-RPC error response 00:27:33.370 response: 00:27:33.370 { 00:27:33.370 "code": -5, 00:27:33.370 "message": "Input/output error" 00:27:33.370 } 00:27:33.627 22:36:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:33.627 22:36:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:33.627 22:36:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:33.627 22:36:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:33.627 22:36:59 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:33.627 
22:36:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.627 22:36:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.627 22:36:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.627 22:36:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.627 22:36:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.885 22:36:59 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:33.885 22:36:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:33.885 22:36:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.885 22:36:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.885 22:36:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.885 22:36:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.885 22:36:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.143 22:36:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:34.143 22:36:59 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.143 22:36:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:34.400 22:36:59 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:34.400 22:36:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:34.658 22:37:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:34.658 22:37:00 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.658 22:37:00 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:34.943 22:37:00 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:34.943 22:37:00 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.F1NvYJTLFR 00:27:34.943 22:37:00 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:34.943 22:37:00 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:34.943 22:37:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:35.234 [2024-07-24 22:37:00.791581] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.F1NvYJTLFR': 0100660 00:27:35.234 [2024-07-24 22:37:00.791624] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:35.234 request: 00:27:35.234 { 00:27:35.234 "name": "key0", 00:27:35.234 "path": "/tmp/tmp.F1NvYJTLFR", 00:27:35.234 "method": "keyring_file_add_key", 00:27:35.234 "req_id": 1 00:27:35.234 } 00:27:35.234 Got JSON-RPC error response 00:27:35.234 response: 00:27:35.234 { 00:27:35.234 "code": -1, 
00:27:35.234 "message": "Operation not permitted" 00:27:35.234 } 00:27:35.234 22:37:00 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:35.234 22:37:00 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:35.234 22:37:00 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:35.234 22:37:00 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:35.234 22:37:00 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.F1NvYJTLFR 00:27:35.234 22:37:00 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:35.234 22:37:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.F1NvYJTLFR 00:27:35.511 22:37:01 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.F1NvYJTLFR 00:27:35.511 22:37:01 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:35.511 22:37:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.511 22:37:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.511 22:37:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.511 22:37:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.511 22:37:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.768 22:37:01 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:35.769 22:37:01 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:35.769 22:37:01 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.769 22:37:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.027 [2024-07-24 22:37:01.537613] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.F1NvYJTLFR': No such file or directory 00:27:36.027 [2024-07-24 22:37:01.537654] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:36.027 [2024-07-24 22:37:01.537687] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:36.027 [2024-07-24 22:37:01.537700] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:36.027 [2024-07-24 22:37:01.537714] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:36.027 request: 00:27:36.027 { 00:27:36.027 "name": "nvme0", 00:27:36.027 "trtype": "tcp", 00:27:36.027 "traddr": "127.0.0.1", 00:27:36.027 "adrfam": "ipv4", 00:27:36.027 "trsvcid": "4420", 00:27:36.027 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.027 
"hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.027 "prchk_reftag": false, 00:27:36.027 "prchk_guard": false, 00:27:36.027 "hdgst": false, 00:27:36.027 "ddgst": false, 00:27:36.027 "psk": "key0", 00:27:36.027 "method": "bdev_nvme_attach_controller", 00:27:36.027 "req_id": 1 00:27:36.027 } 00:27:36.027 Got JSON-RPC error response 00:27:36.027 response: 00:27:36.027 { 00:27:36.027 "code": -19, 00:27:36.027 "message": "No such device" 00:27:36.027 } 00:27:36.027 22:37:01 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:36.027 22:37:01 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:36.027 22:37:01 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:36.027 22:37:01 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:36.027 22:37:01 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:36.027 22:37:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:36.285 22:37:01 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.l9hKLjyck0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@702 -- # local prefix 
key digest 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:36.285 22:37:01 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.l9hKLjyck0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.l9hKLjyck0 00:27:36.285 22:37:01 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.l9hKLjyck0 00:27:36.285 22:37:01 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l9hKLjyck0 00:27:36.285 22:37:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l9hKLjyck0 00:27:36.543 22:37:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.543 22:37:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.801 nvme0n1 00:27:36.801 22:37:02 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:36.801 22:37:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.801 22:37:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.801 22:37:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.801 22:37:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.801 22:37:02 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.058 22:37:02 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:37.058 22:37:02 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:37.058 22:37:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:37.316 22:37:03 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:37.316 22:37:03 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:37.316 22:37:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.316 22:37:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.316 22:37:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.886 22:37:03 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:37.886 22:37:03 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:37.886 22:37:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.886 22:37:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.886 22:37:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.886 22:37:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.886 22:37:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:38.144 22:37:03 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:38.144 22:37:03 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:38.144 22:37:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.402 22:37:03 
keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:27:38.402 22:37:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.402 22:37:03 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:38.659 22:37:04 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:38.659 22:37:04 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.l9hKLjyck0 00:27:38.659 22:37:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.l9hKLjyck0 00:27:38.917 22:37:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.e2K00eNlbB 00:27:38.917 22:37:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.e2K00eNlbB 00:27:39.175 22:37:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.175 22:37:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.432 nvme0n1 00:27:39.432 22:37:04 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:39.432 22:37:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:39.689 22:37:05 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:39.689 "subsystems": [ 00:27:39.690 { 00:27:39.690 "subsystem": "keyring", 00:27:39.690 "config": [ 00:27:39.690 { 
00:27:39.690 "method": "keyring_file_add_key", 00:27:39.690 "params": { 00:27:39.690 "name": "key0", 00:27:39.690 "path": "/tmp/tmp.l9hKLjyck0" 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "keyring_file_add_key", 00:27:39.690 "params": { 00:27:39.690 "name": "key1", 00:27:39.690 "path": "/tmp/tmp.e2K00eNlbB" 00:27:39.690 } 00:27:39.690 } 00:27:39.690 ] 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "subsystem": "iobuf", 00:27:39.690 "config": [ 00:27:39.690 { 00:27:39.690 "method": "iobuf_set_options", 00:27:39.690 "params": { 00:27:39.690 "small_pool_count": 8192, 00:27:39.690 "large_pool_count": 1024, 00:27:39.690 "small_bufsize": 8192, 00:27:39.690 "large_bufsize": 135168 00:27:39.690 } 00:27:39.690 } 00:27:39.690 ] 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "subsystem": "sock", 00:27:39.690 "config": [ 00:27:39.690 { 00:27:39.690 "method": "sock_set_default_impl", 00:27:39.690 "params": { 00:27:39.690 "impl_name": "posix" 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "sock_impl_set_options", 00:27:39.690 "params": { 00:27:39.690 "impl_name": "ssl", 00:27:39.690 "recv_buf_size": 4096, 00:27:39.690 "send_buf_size": 4096, 00:27:39.690 "enable_recv_pipe": true, 00:27:39.690 "enable_quickack": false, 00:27:39.690 "enable_placement_id": 0, 00:27:39.690 "enable_zerocopy_send_server": true, 00:27:39.690 "enable_zerocopy_send_client": false, 00:27:39.690 "zerocopy_threshold": 0, 00:27:39.690 "tls_version": 0, 00:27:39.690 "enable_ktls": false 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "sock_impl_set_options", 00:27:39.690 "params": { 00:27:39.690 "impl_name": "posix", 00:27:39.690 "recv_buf_size": 2097152, 00:27:39.690 "send_buf_size": 2097152, 00:27:39.690 "enable_recv_pipe": true, 00:27:39.690 "enable_quickack": false, 00:27:39.690 "enable_placement_id": 0, 00:27:39.690 "enable_zerocopy_send_server": true, 00:27:39.690 "enable_zerocopy_send_client": false, 00:27:39.690 "zerocopy_threshold": 0, 
00:27:39.690 "tls_version": 0, 00:27:39.690 "enable_ktls": false 00:27:39.690 } 00:27:39.690 } 00:27:39.690 ] 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "subsystem": "vmd", 00:27:39.690 "config": [] 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "subsystem": "accel", 00:27:39.690 "config": [ 00:27:39.690 { 00:27:39.690 "method": "accel_set_options", 00:27:39.690 "params": { 00:27:39.690 "small_cache_size": 128, 00:27:39.690 "large_cache_size": 16, 00:27:39.690 "task_count": 2048, 00:27:39.690 "sequence_count": 2048, 00:27:39.690 "buf_count": 2048 00:27:39.690 } 00:27:39.690 } 00:27:39.690 ] 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "subsystem": "bdev", 00:27:39.690 "config": [ 00:27:39.690 { 00:27:39.690 "method": "bdev_set_options", 00:27:39.690 "params": { 00:27:39.690 "bdev_io_pool_size": 65535, 00:27:39.690 "bdev_io_cache_size": 256, 00:27:39.690 "bdev_auto_examine": true, 00:27:39.690 "iobuf_small_cache_size": 128, 00:27:39.690 "iobuf_large_cache_size": 16 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "bdev_raid_set_options", 00:27:39.690 "params": { 00:27:39.690 "process_window_size_kb": 1024, 00:27:39.690 "process_max_bandwidth_mb_sec": 0 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "bdev_iscsi_set_options", 00:27:39.690 "params": { 00:27:39.690 "timeout_sec": 30 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "bdev_nvme_set_options", 00:27:39.690 "params": { 00:27:39.690 "action_on_timeout": "none", 00:27:39.690 "timeout_us": 0, 00:27:39.690 "timeout_admin_us": 0, 00:27:39.690 "keep_alive_timeout_ms": 10000, 00:27:39.690 "arbitration_burst": 0, 00:27:39.690 "low_priority_weight": 0, 00:27:39.690 "medium_priority_weight": 0, 00:27:39.690 "high_priority_weight": 0, 00:27:39.690 "nvme_adminq_poll_period_us": 10000, 00:27:39.690 "nvme_ioq_poll_period_us": 0, 00:27:39.690 "io_queue_requests": 512, 00:27:39.690 "delay_cmd_submit": true, 00:27:39.690 "transport_retry_count": 4, 00:27:39.690 
"bdev_retry_count": 3, 00:27:39.690 "transport_ack_timeout": 0, 00:27:39.690 "ctrlr_loss_timeout_sec": 0, 00:27:39.690 "reconnect_delay_sec": 0, 00:27:39.690 "fast_io_fail_timeout_sec": 0, 00:27:39.690 "disable_auto_failback": false, 00:27:39.690 "generate_uuids": false, 00:27:39.690 "transport_tos": 0, 00:27:39.690 "nvme_error_stat": false, 00:27:39.690 "rdma_srq_size": 0, 00:27:39.690 "io_path_stat": false, 00:27:39.690 "allow_accel_sequence": false, 00:27:39.690 "rdma_max_cq_size": 0, 00:27:39.690 "rdma_cm_event_timeout_ms": 0, 00:27:39.690 "dhchap_digests": [ 00:27:39.690 "sha256", 00:27:39.690 "sha384", 00:27:39.690 "sha512" 00:27:39.690 ], 00:27:39.690 "dhchap_dhgroups": [ 00:27:39.690 "null", 00:27:39.690 "ffdhe2048", 00:27:39.690 "ffdhe3072", 00:27:39.690 "ffdhe4096", 00:27:39.690 "ffdhe6144", 00:27:39.690 "ffdhe8192" 00:27:39.690 ] 00:27:39.690 } 00:27:39.690 }, 00:27:39.690 { 00:27:39.690 "method": "bdev_nvme_attach_controller", 00:27:39.690 "params": { 00:27:39.690 "name": "nvme0", 00:27:39.690 "trtype": "TCP", 00:27:39.690 "adrfam": "IPv4", 00:27:39.690 "traddr": "127.0.0.1", 00:27:39.690 "trsvcid": "4420", 00:27:39.690 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.690 "prchk_reftag": false, 00:27:39.690 "prchk_guard": false, 00:27:39.690 "ctrlr_loss_timeout_sec": 0, 00:27:39.690 "reconnect_delay_sec": 0, 00:27:39.690 "fast_io_fail_timeout_sec": 0, 00:27:39.690 "psk": "key0", 00:27:39.690 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.691 "hdgst": false, 00:27:39.691 "ddgst": false 00:27:39.691 } 00:27:39.691 }, 00:27:39.691 { 00:27:39.691 "method": "bdev_nvme_set_hotplug", 00:27:39.691 "params": { 00:27:39.691 "period_us": 100000, 00:27:39.691 "enable": false 00:27:39.691 } 00:27:39.691 }, 00:27:39.691 { 00:27:39.691 "method": "bdev_wait_for_examine" 00:27:39.691 } 00:27:39.691 ] 00:27:39.691 }, 00:27:39.691 { 00:27:39.691 "subsystem": "nbd", 00:27:39.691 "config": [] 00:27:39.691 } 00:27:39.691 ] 00:27:39.691 }' 00:27:39.691 22:37:05 keyring_file 
-- keyring/file.sh@114 -- # killprocess 3949755 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3949755 ']' 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3949755 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3949755 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3949755' 00:27:39.691 killing process with pid 3949755 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@967 -- # kill 3949755 00:27:39.691 Received shutdown signal, test time was about 1.000000 seconds 00:27:39.691 00:27:39.691 Latency(us) 00:27:39.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.691 =================================================================================================================== 00:27:39.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.691 22:37:05 keyring_file -- common/autotest_common.sh@972 -- # wait 3949755 00:27:39.949 22:37:05 keyring_file -- keyring/file.sh@117 -- # bperfpid=3950909 00:27:39.949 22:37:05 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3950909 /var/tmp/bperf.sock 00:27:39.949 22:37:05 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3950909 ']' 00:27:39.949 22:37:05 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.949 22:37:05 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r 
/var/tmp/bperf.sock -z -c /dev/fd/63 00:27:39.949 22:37:05 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:39.949 22:37:05 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.949 22:37:05 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:39.949 "subsystems": [ 00:27:39.949 { 00:27:39.949 "subsystem": "keyring", 00:27:39.949 "config": [ 00:27:39.949 { 00:27:39.949 "method": "keyring_file_add_key", 00:27:39.949 "params": { 00:27:39.949 "name": "key0", 00:27:39.949 "path": "/tmp/tmp.l9hKLjyck0" 00:27:39.949 } 00:27:39.949 }, 00:27:39.949 { 00:27:39.949 "method": "keyring_file_add_key", 00:27:39.949 "params": { 00:27:39.949 "name": "key1", 00:27:39.949 "path": "/tmp/tmp.e2K00eNlbB" 00:27:39.949 } 00:27:39.949 } 00:27:39.949 ] 00:27:39.949 }, 00:27:39.949 { 00:27:39.949 "subsystem": "iobuf", 00:27:39.949 "config": [ 00:27:39.949 { 00:27:39.949 "method": "iobuf_set_options", 00:27:39.949 "params": { 00:27:39.949 "small_pool_count": 8192, 00:27:39.949 "large_pool_count": 1024, 00:27:39.949 "small_bufsize": 8192, 00:27:39.949 "large_bufsize": 135168 00:27:39.949 } 00:27:39.949 } 00:27:39.949 ] 00:27:39.949 }, 00:27:39.949 { 00:27:39.949 "subsystem": "sock", 00:27:39.949 "config": [ 00:27:39.949 { 00:27:39.949 "method": "sock_set_default_impl", 00:27:39.949 "params": { 00:27:39.949 "impl_name": "posix" 00:27:39.949 } 00:27:39.949 }, 00:27:39.949 { 00:27:39.949 "method": "sock_impl_set_options", 00:27:39.949 "params": { 00:27:39.949 "impl_name": "ssl", 00:27:39.949 "recv_buf_size": 4096, 00:27:39.949 "send_buf_size": 4096, 00:27:39.949 "enable_recv_pipe": true, 00:27:39.949 "enable_quickack": false, 00:27:39.949 "enable_placement_id": 0, 00:27:39.949 "enable_zerocopy_send_server": true, 00:27:39.949 "enable_zerocopy_send_client": false, 
00:27:39.949 "zerocopy_threshold": 0, 00:27:39.949 "tls_version": 0, 00:27:39.949 "enable_ktls": false 00:27:39.949 } 00:27:39.949 }, 00:27:39.949 { 00:27:39.949 "method": "sock_impl_set_options", 00:27:39.949 "params": { 00:27:39.949 "impl_name": "posix", 00:27:39.949 "recv_buf_size": 2097152, 00:27:39.949 "send_buf_size": 2097152, 00:27:39.949 "enable_recv_pipe": true, 00:27:39.949 "enable_quickack": false, 00:27:39.949 "enable_placement_id": 0, 00:27:39.949 "enable_zerocopy_send_server": true, 00:27:39.949 "enable_zerocopy_send_client": false, 00:27:39.950 "zerocopy_threshold": 0, 00:27:39.950 "tls_version": 0, 00:27:39.950 "enable_ktls": false 00:27:39.950 } 00:27:39.950 } 00:27:39.950 ] 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "subsystem": "vmd", 00:27:39.950 "config": [] 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "subsystem": "accel", 00:27:39.950 "config": [ 00:27:39.950 { 00:27:39.950 "method": "accel_set_options", 00:27:39.950 "params": { 00:27:39.950 "small_cache_size": 128, 00:27:39.950 "large_cache_size": 16, 00:27:39.950 "task_count": 2048, 00:27:39.950 "sequence_count": 2048, 00:27:39.950 "buf_count": 2048 00:27:39.950 } 00:27:39.950 } 00:27:39.950 ] 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "subsystem": "bdev", 00:27:39.950 "config": [ 00:27:39.950 { 00:27:39.950 "method": "bdev_set_options", 00:27:39.950 "params": { 00:27:39.950 "bdev_io_pool_size": 65535, 00:27:39.950 "bdev_io_cache_size": 256, 00:27:39.950 "bdev_auto_examine": true, 00:27:39.950 "iobuf_small_cache_size": 128, 00:27:39.950 "iobuf_large_cache_size": 16 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": "bdev_raid_set_options", 00:27:39.950 "params": { 00:27:39.950 "process_window_size_kb": 1024, 00:27:39.950 "process_max_bandwidth_mb_sec": 0 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": "bdev_iscsi_set_options", 00:27:39.950 "params": { 00:27:39.950 "timeout_sec": 30 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": 
"bdev_nvme_set_options", 00:27:39.950 "params": { 00:27:39.950 "action_on_timeout": "none", 00:27:39.950 "timeout_us": 0, 00:27:39.950 "timeout_admin_us": 0, 00:27:39.950 "keep_alive_timeout_ms": 10000, 00:27:39.950 "arbitration_burst": 0, 00:27:39.950 "low_priority_weight": 0, 00:27:39.950 "medium_priority_weight": 0, 00:27:39.950 "high_priority_weight": 0, 00:27:39.950 "nvme_adminq_poll_period_us": 10000, 00:27:39.950 "nvme_ioq_poll_period_us": 0, 00:27:39.950 "io_queue_requests": 512, 00:27:39.950 "delay_cmd_submit": true, 00:27:39.950 "transport_retry_count": 4, 00:27:39.950 "bdev_retry_count": 3, 00:27:39.950 "transport_ack_timeout": 0, 00:27:39.950 "ctrlr_loss_timeout_sec": 0, 00:27:39.950 "reconnect_delay_sec": 0, 00:27:39.950 "fast_io_fail_timeout_sec": 0, 00:27:39.950 "disable_auto_failback": false, 00:27:39.950 "generate_uuids": false, 00:27:39.950 "transport_tos": 0, 00:27:39.950 "nvme_error_stat": false, 00:27:39.950 "rdma_srq_size": 0, 00:27:39.950 "io_path_stat": false, 00:27:39.950 "allow_accel_sequence": false, 00:27:39.950 "rdma_max_cq_size": 0, 00:27:39.950 "rdma_cm_event_timeout_ms": 0, 00:27:39.950 "dhchap_digests": [ 00:27:39.950 "sha256", 00:27:39.950 "sha384", 00:27:39.950 "sha512" 00:27:39.950 ], 00:27:39.950 "dhchap_dhgroups": [ 00:27:39.950 "null", 00:27:39.950 "ffdhe2048", 00:27:39.950 "ffdhe3072", 00:27:39.950 "ffdhe4096", 00:27:39.950 "ffdhe6144", 00:27:39.950 "ffdhe8192" 00:27:39.950 ] 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": "bdev_nvme_attach_controller", 00:27:39.950 "params": { 00:27:39.950 "name": "nvme0", 00:27:39.950 "trtype": "TCP", 00:27:39.950 "adrfam": "IPv4", 00:27:39.950 "traddr": "127.0.0.1", 00:27:39.950 "trsvcid": "4420", 00:27:39.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.950 "prchk_reftag": false, 00:27:39.950 "prchk_guard": false, 00:27:39.950 "ctrlr_loss_timeout_sec": 0, 00:27:39.950 "reconnect_delay_sec": 0, 00:27:39.950 "fast_io_fail_timeout_sec": 0, 00:27:39.950 "psk": "key0", 
00:27:39.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.950 "hdgst": false, 00:27:39.950 "ddgst": false 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": "bdev_nvme_set_hotplug", 00:27:39.950 "params": { 00:27:39.950 "period_us": 100000, 00:27:39.950 "enable": false 00:27:39.950 } 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "method": "bdev_wait_for_examine" 00:27:39.950 } 00:27:39.950 ] 00:27:39.950 }, 00:27:39.950 { 00:27:39.950 "subsystem": "nbd", 00:27:39.950 "config": [] 00:27:39.950 } 00:27:39.950 ] 00:27:39.950 }' 00:27:39.950 22:37:05 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:39.950 22:37:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.950 [2024-07-24 22:37:05.573803] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 00:27:39.950 [2024-07-24 22:37:05.573890] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3950909 ] 00:27:39.950 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.950 [2024-07-24 22:37:05.634740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.208 [2024-07-24 22:37:05.751781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.466 [2024-07-24 22:37:05.930137] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:41.031 22:37:06 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:41.031 22:37:06 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:41.031 22:37:06 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:41.031 22:37:06 keyring_file -- keyring/file.sh@120 -- # jq length 00:27:41.031 22:37:06 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.289 22:37:06 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:41.289 22:37:06 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:41.289 22:37:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.289 22:37:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.289 22:37:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.289 22:37:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.289 22:37:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.547 22:37:07 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:41.547 22:37:07 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:41.547 22:37:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:41.547 22:37:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.547 22:37:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.547 22:37:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.547 22:37:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:42.111 22:37:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:42.111 22:37:07 keyring_file -- 
keyring/file.sh@1 -- # cleanup 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.l9hKLjyck0 /tmp/tmp.e2K00eNlbB 00:27:42.111 22:37:07 keyring_file -- keyring/file.sh@20 -- # killprocess 3950909 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3950909 ']' 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3950909 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3950909 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3950909' 00:27:42.111 killing process with pid 3950909 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@967 -- # kill 3950909 00:27:42.111 Received shutdown signal, test time was about 1.000000 seconds 00:27:42.111 00:27:42.111 Latency(us) 00:27:42.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:42.111 =================================================================================================================== 00:27:42.111 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:42.111 22:37:07 keyring_file -- common/autotest_common.sh@972 -- # wait 3950909 00:27:42.369 22:37:08 keyring_file -- keyring/file.sh@21 -- # killprocess 3949676 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3949676 ']' 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3949676 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@953 -- # uname 00:27:42.369 22:37:08 keyring_file -- 
common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3949676 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3949676' 00:27:42.369 killing process with pid 3949676 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@967 -- # kill 3949676 00:27:42.369 [2024-07-24 22:37:08.037130] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:42.369 22:37:08 keyring_file -- common/autotest_common.sh@972 -- # wait 3949676 00:27:42.938 00:27:42.938 real 0m15.020s 00:27:42.938 user 0m38.111s 00:27:42.938 sys 0m3.266s 00:27:42.938 22:37:08 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:42.938 22:37:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:42.938 ************************************ 00:27:42.938 END TEST keyring_file 00:27:42.938 ************************************ 00:27:42.938 22:37:08 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:27:42.938 22:37:08 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:42.938 22:37:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:42.938 22:37:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:42.938 22:37:08 -- common/autotest_common.sh@10 -- # set +x 00:27:42.938 ************************************ 00:27:42.938 START TEST keyring_linux 00:27:42.938 ************************************ 00:27:42.938 22:37:08 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:42.938 * Looking for 
test storage... 00:27:42.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a27f578f-8275-e111-bd1d-001e673e77fc 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a27f578f-8275-e111-bd1d-001e673e77fc 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.938 22:37:08 keyring_linux 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.938 22:37:08 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.938 22:37:08 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.938 22:37:08 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.938 22:37:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.938 22:37:08 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.938 22:37:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.938 22:37:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:42.938 22:37:08 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:42.938 22:37:08 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:42.938 /tmp/:spdk-test:key0 00:27:42.938 22:37:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:42.938 22:37:08 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:42.938 22:37:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:42.938 /tmp/:spdk-test:key1 00:27:42.939 22:37:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3951295 00:27:42.939 22:37:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:42.939 22:37:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3951295 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3951295 ']' 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.939 22:37:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.197 [2024-07-24 22:37:08.651091] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:27:43.197 [2024-07-24 22:37:08.651186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951295 ] 00:27:43.197 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.197 [2024-07-24 22:37:08.711998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.197 [2024-07-24 22:37:08.830371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.454 [2024-07-24 22:37:09.058822] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.454 null0 00:27:43.454 [2024-07-24 22:37:09.090876] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:43.454 [2024-07-24 22:37:09.091256] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:43.454 13056247 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:43.454 138849588 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3951396 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3951396 
/var/tmp/bperf.sock 00:27:43.454 22:37:09 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3951396 ']' 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.454 22:37:09 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.712 [2024-07-24 22:37:09.160635] Starting SPDK v24.09-pre git sha1 643864934 / DPDK 24.03.0 initialization... 
00:27:43.712 [2024-07-24 22:37:09.160731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951396 ] 00:27:43.712 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.712 [2024-07-24 22:37:09.234913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.712 [2024-07-24 22:37:09.386549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.969 22:37:09 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:43.969 22:37:09 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:27:43.969 22:37:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:43.969 22:37:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:44.226 22:37:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:44.226 22:37:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.484 22:37:10 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:44.484 22:37:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:45.049 [2024-07-24 22:37:10.460776] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:45.049 
nvme0n1 00:27:45.049 22:37:10 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:45.049 22:37:10 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:45.049 22:37:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:45.050 22:37:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:45.050 22:37:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:45.050 22:37:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.307 22:37:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:45.307 22:37:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:45.307 22:37:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:45.307 22:37:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:45.307 22:37:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:45.307 22:37:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:45.307 22:37:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@25 -- # sn=13056247 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@26 -- # [[ 13056247 == \1\3\0\5\6\2\4\7 ]] 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 13056247 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:45.565 22:37:11 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.565 Running I/O for 1 seconds... 00:27:46.939 00:27:46.939 Latency(us) 00:27:46.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.939 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:46.939 nvme0n1 : 1.01 7263.47 28.37 0.00 0.00 17475.57 11311.03 29903.83 00:27:46.939 =================================================================================================================== 00:27:46.939 Total : 7263.47 28.37 0.00 0.00 17475.57 11311.03 29903.83 00:27:46.939 0 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:46.939 22:37:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:46.939 22:37:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.939 22:37:12 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:47.197 22:37:12 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:47.197 22:37:12 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:47.197 22:37:12 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:47.197 22:37:12 keyring_linux 
-- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:47.197 22:37:12 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.197 22:37:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.454 [2024-07-24 22:37:13.118986] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:47.454 [2024-07-24 22:37:13.119362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1592500 (107): Transport endpoint is not connected 00:27:47.455 [2024-07-24 22:37:13.120353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1592500 (9): Bad file descriptor 00:27:47.455 [2024-07-24 22:37:13.121360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:47.455 [2024-07-24 22:37:13.121380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:47.455 [2024-07-24 22:37:13.121395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:47.455 request: 00:27:47.455 { 00:27:47.455 "name": "nvme0", 00:27:47.455 "trtype": "tcp", 00:27:47.455 "traddr": "127.0.0.1", 00:27:47.455 "adrfam": "ipv4", 00:27:47.455 "trsvcid": "4420", 00:27:47.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.455 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.455 "prchk_reftag": false, 00:27:47.455 "prchk_guard": false, 00:27:47.455 "hdgst": false, 00:27:47.455 "ddgst": false, 00:27:47.455 "psk": ":spdk-test:key1", 00:27:47.455 "method": "bdev_nvme_attach_controller", 00:27:47.455 "req_id": 1 00:27:47.455 } 00:27:47.455 Got JSON-RPC error response 00:27:47.455 response: 00:27:47.455 { 00:27:47.455 "code": -5, 00:27:47.455 "message": "Input/output error" 00:27:47.455 } 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@33 -- # sn=13056247 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 13056247 00:27:47.455 1 links removed 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@33 -- # sn=138849588 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 138849588 00:27:47.455 1 links removed 00:27:47.455 22:37:13 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3951396 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3951396 ']' 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3951396 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.455 22:37:13 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3951396 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3951396' 00:27:47.717 killing process with pid 3951396 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@967 -- # kill 3951396 00:27:47.717 Received shutdown signal, test time was about 1.000000 seconds 00:27:47.717 00:27:47.717 Latency(us) 00:27:47.717 Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.717 =================================================================================================================== 00:27:47.717 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@972 -- # wait 3951396 00:27:47.717 22:37:13 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3951295 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3951295 ']' 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3951295 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3951295 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3951295' 00:27:47.717 killing process with pid 3951295 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@967 -- # kill 3951295 00:27:47.717 22:37:13 keyring_linux -- common/autotest_common.sh@972 -- # wait 3951295 00:27:48.284 00:27:48.284 real 0m5.310s 00:27:48.284 user 0m10.619s 00:27:48.284 sys 0m1.678s 00:27:48.284 22:37:13 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:48.284 22:37:13 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:48.284 ************************************ 00:27:48.284 END TEST keyring_linux 00:27:48.284 ************************************ 00:27:48.284 22:37:13 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 
']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:48.284 22:37:13 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:27:48.284 22:37:13 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:48.284 22:37:13 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:48.284 22:37:13 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:48.284 22:37:13 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:27:48.284 22:37:13 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:27:48.284 22:37:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:48.284 22:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:48.284 22:37:13 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:27:48.284 22:37:13 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:27:48.284 22:37:13 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:27:48.284 22:37:13 -- common/autotest_common.sh@10 -- # set +x 00:27:49.661 INFO: APP EXITING 00:27:49.661 INFO: killing all VMs 00:27:49.661 INFO: killing vhost app 00:27:49.661 WARN: no vhost pid file found 00:27:49.661 INFO: EXIT DONE 00:27:50.597 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:27:50.597 0000:00:04.7 (8086 3c27): Already using the ioatdma driver 00:27:50.597 0000:00:04.6 (8086 3c26): Already using the ioatdma driver 00:27:50.597 0000:00:04.5 (8086 3c25): Already using the ioatdma driver 00:27:50.597 0000:00:04.4 (8086 3c24): Already using the ioatdma driver 00:27:50.597 0000:00:04.3 (8086 3c23): Already using the ioatdma driver 
00:27:50.597 0000:00:04.2 (8086 3c22): Already using the ioatdma driver 00:27:50.597 0000:00:04.1 (8086 3c21): Already using the ioatdma driver 00:27:50.597 0000:00:04.0 (8086 3c20): Already using the ioatdma driver 00:27:50.597 0000:80:04.7 (8086 3c27): Already using the ioatdma driver 00:27:50.597 0000:80:04.6 (8086 3c26): Already using the ioatdma driver 00:27:50.856 0000:80:04.5 (8086 3c25): Already using the ioatdma driver 00:27:50.856 0000:80:04.4 (8086 3c24): Already using the ioatdma driver 00:27:50.856 0000:80:04.3 (8086 3c23): Already using the ioatdma driver 00:27:50.856 0000:80:04.2 (8086 3c22): Already using the ioatdma driver 00:27:50.856 0000:80:04.1 (8086 3c21): Already using the ioatdma driver 00:27:50.856 0000:80:04.0 (8086 3c20): Already using the ioatdma driver 00:27:51.795 Cleaning 00:27:51.795 Removing: /var/run/dpdk/spdk0/config 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:51.795 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:51.795 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:51.795 Removing: /var/run/dpdk/spdk1/config 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:51.795 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:51.795 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:51.795 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:51.795 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:51.795 Removing: /var/run/dpdk/spdk2/config 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:51.795 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:51.795 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:51.795 Removing: /var/run/dpdk/spdk3/config 00:27:51.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:51.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:51.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:52.054 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:52.054 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:52.054 Removing: /var/run/dpdk/spdk4/config 00:27:52.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:52.054 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:52.055 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:52.055 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:52.055 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:52.055 Removing: /dev/shm/bdev_svc_trace.1 00:27:52.055 Removing: /dev/shm/nvmf_trace.0 00:27:52.055 Removing: /dev/shm/spdk_tgt_trace.pid3744610 00:27:52.055 Removing: /var/run/dpdk/spdk0 00:27:52.055 Removing: /var/run/dpdk/spdk1 00:27:52.055 Removing: /var/run/dpdk/spdk2 00:27:52.055 Removing: /var/run/dpdk/spdk3 00:27:52.055 Removing: /var/run/dpdk/spdk4 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3743389 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3743959 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3744610 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3744981 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3745513 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3745617 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3746173 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3746182 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3746393 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3747428 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3748144 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3748396 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3748550 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3748720 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3748876 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3749020 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3749219 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3749376 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3749644 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3751665 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3751795 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3751931 00:27:52.055 Removing: 
/var/run/dpdk/spdk_pid3752027 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752273 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752365 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752607 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752706 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752839 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752853 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3752987 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753079 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753385 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753520 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753760 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753896 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3753920 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754081 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754200 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754325 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754539 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754660 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754785 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3754998 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755124 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755245 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755440 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755583 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755709 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3755860 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756037 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756166 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756297 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756501 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756625 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756840 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3756969 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3757095 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3757251 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3757423 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3759046 
00:27:52.055 Removing: /var/run/dpdk/spdk_pid3761157 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3767111 00:27:52.055 Removing: /var/run/dpdk/spdk_pid3767515 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3769374 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3769582 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3771533 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3774482 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3776249 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3781204 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3785132 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3786131 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3786643 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3795298 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3797063 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3816747 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3819281 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3822977 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3825959 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3825965 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3826465 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3826954 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3827454 00:27:52.313 Removing: /var/run/dpdk/spdk_pid3827755 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3827766 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3827875 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3827980 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3828051 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3828485 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3828977 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3829475 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3829787 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3829872 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3829989 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3830769 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3831334 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3835396 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3858029 00:27:52.314 Removing: 
/var/run/dpdk/spdk_pid3860279 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3861287 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3862800 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3862907 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3863000 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3863111 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3863368 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3864389 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3864936 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3865259 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3866584 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3866905 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3867260 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3869201 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3873634 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3875774 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3878765 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3879525 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3880386 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3882402 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3884145 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3887375 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3887411 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3889671 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3889967 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3890395 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3890689 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3890694 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3892800 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3893072 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3895117 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3896627 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3899251 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3901955 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3907139 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3910532 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3910605 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3921229 
00:27:52.314 Removing: /var/run/dpdk/spdk_pid3921543 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3921884 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3922276 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3922724 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3923040 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3923440 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3923751 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3925660 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3925798 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3928700 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3928842 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3930098 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3933978 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3934019 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3936228 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3937296 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3938363 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3939016 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3940087 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3940800 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3945471 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3945770 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3946066 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3947203 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3947505 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3947811 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3949676 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3949755 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3950909 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3951295 00:27:52.314 Removing: /var/run/dpdk/spdk_pid3951396 00:27:52.314 Clean 00:27:52.572 22:37:18 -- common/autotest_common.sh@1449 -- # return 0 00:27:52.572 22:37:18 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:27:52.572 22:37:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.572 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 22:37:18 -- spdk/autotest.sh@386 -- # 
timing_exit autotest 00:27:52.572 22:37:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.572 22:37:18 -- common/autotest_common.sh@10 -- # set +x 00:27:52.572 22:37:18 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:52.572 22:37:18 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:52.572 22:37:18 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:52.572 22:37:18 -- spdk/autotest.sh@391 -- # hash lcov 00:27:52.572 22:37:18 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:52.572 22:37:18 -- spdk/autotest.sh@393 -- # hostname 00:27:52.572 22:37:18 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-02 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:52.830 geninfo: WARNING: invalid characters removed from testname! 
00:28:31.538 22:37:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:31.538 22:37:56 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:33.443 22:37:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:36.730 22:38:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:40.006 22:38:05 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:42.533 22:38:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:28:45.827 22:38:11 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:28:45.827 22:38:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:28:45.827 22:38:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:28:45.827 22:38:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:28:45.827 22:38:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:28:45.827 22:38:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:45.827 22:38:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:45.828 22:38:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:45.828 22:38:11 -- paths/export.sh@5 -- $ export PATH
00:28:45.828 22:38:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:45.828 22:38:11 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:45.828 22:38:11 -- common/autobuild_common.sh@447 -- $ date +%s
00:28:45.828 22:38:11 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721853491.XXXXXX
00:28:45.828 22:38:11 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721853491.uUc6DP
00:28:45.828 22:38:11 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:28:45.828 22:38:11 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:28:45.828 22:38:11 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:45.828 22:38:11 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:45.828 22:38:11 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:45.828 22:38:11 -- common/autobuild_common.sh@463 -- $ get_config_params
00:28:45.828 22:38:11 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:28:45.828 22:38:11 -- common/autotest_common.sh@10 -- $ set +x
00:28:45.828 22:38:11 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:45.828 22:38:11 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:28:45.828 22:38:11 -- pm/common@17 -- $ local monitor
00:28:45.828 22:38:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:45.828 22:38:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:45.828 22:38:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:45.828 22:38:11 -- pm/common@21 -- $ date +%s
00:28:45.828 22:38:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:45.828 22:38:11 -- pm/common@21 -- $ date +%s
00:28:45.828 22:38:11 -- pm/common@25 -- $ sleep 1
00:28:45.828 22:38:11 -- pm/common@21 -- $ date +%s
00:28:45.828 22:38:11 -- pm/common@21 -- $ date +%s
00:28:45.828 22:38:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721853491
00:28:45.828 22:38:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721853491
00:28:45.828 22:38:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721853491
00:28:45.828 22:38:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721853491
00:28:45.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721853491_collect-vmstat.pm.log
00:28:45.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721853491_collect-cpu-load.pm.log
00:28:45.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721853491_collect-cpu-temp.pm.log
00:28:45.828 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721853491_collect-bmc-pm.bmc.pm.log
00:28:46.394 22:38:12 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:28:46.394 22:38:12 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j32
00:28:46.395 22:38:12 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:46.395 22:38:12 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:46.395 22:38:12 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:46.395 22:38:12 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:46.395 22:38:12 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:46.395 22:38:12 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:46.395 22:38:12 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:46.655 22:38:12 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:46.655 22:38:12 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:46.655 22:38:12 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:46.655 22:38:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:46.655 22:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:46.655 22:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:46.655 22:38:12 -- pm/common@44 -- $ pid=3960184
00:28:46.655 22:38:12 -- pm/common@50 -- $ kill -TERM 3960184
00:28:46.655 22:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:46.655 22:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:46.655 22:38:12 -- pm/common@44 -- $ pid=3960186
00:28:46.655 22:38:12 -- pm/common@50 -- $ kill -TERM 3960186
00:28:46.655 22:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:46.655 22:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:46.655 22:38:12 -- pm/common@44 -- $ pid=3960188
00:28:46.655 22:38:12 -- pm/common@50 -- $ kill -TERM 3960188
00:28:46.655 22:38:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:46.655 22:38:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:46.655 22:38:12 -- pm/common@44 -- $ pid=3960216
00:28:46.655 22:38:12 -- pm/common@50 -- $ sudo -E kill -TERM 3960216
00:28:46.655 + [[ -n 3666886 ]]
00:28:46.655 + sudo kill 3666886
00:28:46.665 [Pipeline] }
00:28:46.687 [Pipeline] // stage
00:28:46.695 [Pipeline] }
00:28:46.712 [Pipeline] // timeout
00:28:46.719 [Pipeline] }
00:28:46.737 [Pipeline] // catchError
00:28:46.743 [Pipeline] }
00:28:46.761 [Pipeline] // wrap
00:28:46.768 [Pipeline] }
00:28:46.784 [Pipeline] // catchError
00:28:46.791 [Pipeline] stage
00:28:46.793 [Pipeline] { (Epilogue)
00:28:46.804 [Pipeline] catchError
00:28:46.806 [Pipeline] {
00:28:46.819 [Pipeline] echo
00:28:46.821 Cleanup processes
00:28:46.827 [Pipeline] sh
00:28:47.115 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:47.115 3960354 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:47.115 3960400 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:47.129 [Pipeline] sh
00:28:47.415 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:47.415 ++ grep -v 'sudo pgrep'
00:28:47.415 ++ awk '{print $1}'
00:28:47.415 + sudo kill -9 3960354
00:28:47.427 [Pipeline] sh
00:28:47.714 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:55.933 [Pipeline] sh
00:28:56.218 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:56.218 Artifacts sizes are good
00:28:56.236 [Pipeline] archiveArtifacts
00:28:56.245 Archiving artifacts
00:28:56.484 [Pipeline] sh
00:28:56.774 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:56.787 [Pipeline] cleanWs
00:28:56.795 [WS-CLEANUP] Deleting project workspace...
00:28:56.795 [WS-CLEANUP] Deferred wipeout is used...
00:28:56.801 [WS-CLEANUP] done
00:28:56.802 [Pipeline] }
00:28:56.816 [Pipeline] // catchError
00:28:56.824 [Pipeline] sh
00:28:57.099 + logger -p user.info -t JENKINS-CI
00:28:57.107 [Pipeline] }
00:28:57.117 [Pipeline] // stage
00:28:57.121 [Pipeline] }
00:28:57.133 [Pipeline] // node
00:28:57.137 [Pipeline] End of Pipeline
00:28:57.156 Finished: SUCCESS